Is denormalizing via views bad (read: slow)?

From: Jason Glover <jglover_at_xtra.co.nz>
Date: 17 Mar 2002 18:17:48 -0800
Message-ID: <27b86948.0203171817.27d460a3_at_posting.google.com>



I am currently reviewing how one of our databases addresses the complicated issue of storing and calculating pricing structures involving:
  • contract prices
  • quantity breaks
  • other various discounting methods

Having just completed a thorough normalization of the current system, I have the world's most elegant logical ER diagram, incorporating the world's most flagrant use of sub-classing.

What I'm wondering - before I go ahead and do it anyway - is whether it would be wiser to:

  a. Implement the physical model fairly close to the logical. Create humongously verbose stored procedures to churn prices out of this fantastical database structure.
  b. Implement the physical model fairly close to the logical. Build a swath of views to denormalize the data a fair amount, thus allowing considerably easier-to-follow stored procedures.
  c. Implement a physical model that is a slightly denormalized version of my sexy logical model, incorporating whatever "hacks" may be required to remove the need for views or juggernaut stored procedures, but that unashamedly prostitutes the perfection of my logical DM.
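To make the view-based option concrete, here is a minimal sketch using Python's sqlite3 with an entirely hypothetical, much-simplified schema (the real model's sub-classing is collapsed into just list prices and contract prices): the normalized base tables stay intact, and a single denormalizing view presents one effective price per (customer, product), so the lookup logic on top stays trivial.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized base tables (hypothetical schema for illustration).
    CREATE TABLE list_prices (
        product_id INTEGER PRIMARY KEY,
        unit_price REAL NOT NULL
    );
    CREATE TABLE contract_prices (
        customer_id INTEGER,
        product_id  INTEGER,
        unit_price  REAL NOT NULL,
        PRIMARY KEY (customer_id, product_id)
    );
    -- The denormalizing view: one row per (customer, product), with the
    -- contract price overriding the list price when one exists.
    CREATE VIEW effective_prices AS
    SELECT c.customer_id,
           l.product_id,
           COALESCE(cp.unit_price, l.unit_price) AS unit_price
    FROM list_prices l
    CROSS JOIN (SELECT DISTINCT customer_id FROM contract_prices) c
    LEFT JOIN contract_prices cp
           ON cp.customer_id = c.customer_id
          AND cp.product_id  = l.product_id;
""")
conn.execute("INSERT INTO list_prices VALUES (1, 10.0), (2, 20.0)")
conn.execute("INSERT INTO contract_prices VALUES (42, 1, 8.5)")

def price_for(customer_id, product_id):
    # The "stored procedure" shrinks to a single-table query against the view.
    row = conn.execute(
        "SELECT unit_price FROM effective_prices "
        "WHERE customer_id = ? AND product_id = ?",
        (customer_id, product_id)).fetchone()
    return row[0] if row else None
```

Customer 42 gets the 8.5 contract price on product 1 but the plain 20.0 list price on product 2. The performance question then comes down to whether the optimizer can merge the view's joins into the outer query, or whether stacking several such views forces it to materialize intermediate results.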

Have I perhaps had too much coffee?

Personally I prefer the legibility of (b), but I am wondering if the trade-off of all those nested views is ultimately going to hurt performance.

Regards,

JBoy

Received on Mon Mar 18 2002 - 03:17:48 CET
