Re: Is denormalizing via views bad (read: slow)?

From: Neil Burnett <neil_at_efc.be>
Date: 18 Mar 2002 11:45:31 -0800
Message-ID: <ea0243b5.0203181145.30cfe261_at_posting.google.com>


jglover_at_xtra.co.nz (Jason Glover) wrote in message news:<27b86948.0203171817.27d460a3_at_posting.google.com>...
> I am currently reviewing how one of our databases addresses the
> complicated issue of storing and calculating pricing structures
> involving:
>
> - contract prices
> - quantity breaks
> - other various discounting methods
>
> Having just completed a thorough normalization of the current system, I
> have the world's most elegant logical ER diagram incorporating the
> world's most flagrant use of sub-classing.
>
> What I'm wondering - before I go ahead and do it anyway - is whether it
> is wiser to:
>
> a) Implement the physical model fairly close to the logical. Create
> humungously verbose stored procedures to churn prices out of this
> fantastical database structure.

I am interested in why this should be so. Surely if the design is good, then you shouldn't need any stored procedures to extract information (facts?) at all? Or am I just being idealistically naive? :-)
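
For what it's worth, here is the sort of thing I have in mind - only a rough sketch, and every table, column and view name below is my own invention rather than anything from your actual model. The sub-classed pricing rules sit behind one view, and the client just selects the effective price from it instead of calling a procedure:

  -- Invented example: a price_rule supertype with two subtype tables
  CREATE TABLE price_rule (
      rule_id    INTEGER PRIMARY KEY,
      product_id INTEGER NOT NULL,
      rule_type  CHAR(1) NOT NULL     -- 'C' = contract, 'Q' = quantity break
  );
  CREATE TABLE contract_price (
      rule_id     INTEGER PRIMARY KEY REFERENCES price_rule,
      customer_id INTEGER NOT NULL,
      unit_price  NUMERIC(10,2) NOT NULL
  );
  CREATE TABLE quantity_break (
      rule_id    INTEGER PRIMARY KEY REFERENCES price_rule,
      min_qty    INTEGER NOT NULL,
      unit_price NUMERIC(10,2) NOT NULL
  );

  -- One view flattens the subtypes back out; applications only ever query this
  CREATE VIEW effective_price AS
  SELECT r.product_id, cp.customer_id, CAST(NULL AS INTEGER) AS min_qty, cp.unit_price
  FROM   price_rule r JOIN contract_price cp ON cp.rule_id = r.rule_id
  UNION ALL
  SELECT r.product_id, CAST(NULL AS INTEGER) AS customer_id, qb.min_qty, qb.unit_price
  FROM   price_rule r JOIN quantity_break qb ON qb.rule_id = r.rule_id;

The application then just asks something like SELECT unit_price FROM effective_price WHERE product_id = 42 AND (customer_id = 7 OR min_qty <= 100); whether that sits over half a dozen normalised tables or one denormalised one is invisible to it.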

'Normalised designs can still be crap of course', as my RDB tutor said; maybe these sub-classes you mention are simply saving on diagram space, but not improving on the accessibility of the data?
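
To make that concrete with the same invented tables as in my sketch above: under the supertype/subtype layout even the simplest question has to join back through the supertype, whereas if contract pricing were just its own entity keyed on (product_id, customer_id) the same question would hit exactly one table.

  -- Sub-classed layout: a plain contract-price lookup still needs the join
  SELECT cp.unit_price
  FROM   price_rule r
  JOIN   contract_price cp ON cp.rule_id = r.rule_id
  WHERE  r.product_id = 42
  AND    cp.customer_id = 7;

  -- Hypothetical alternative without the supertype: one table, one predicate
  -- SELECT unit_price FROM contract_price
  -- WHERE  product_id = 42 AND customer_id = 7;

That is the sense in which the sub-classes can save diagram space without making the data any easier to get at.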

I too am in the process of changing some old designs and have been experimenting with the sub-class approach. Sometimes it's fine, but most of the time it leads to a very tricky and convoluted user interface, and I wish I had just put more entities and relationships into my diagram. Sub-classing often increases abstraction at the expense of entity and relationship definition - a temptation brought on by reading too many object-oriented books :-)

--Neil
