Re: Design Question

From: Jay Kash <>
Date: Sat, 17 Oct 2009 14:30:39 -0500
Message-ID: <BLU117-DS50B32DB4EF6933550AE46ACC30@phx.gbl>

Tuning for performance is often feasible, but correcting bad data can get really messy.
What good is bad data that performs well?


From: "Nuno Souto" <> Sent: Friday, October 16, 2009 7:12 PM
Cc: <>
Subject: Re: Design Question

> Balakrishnan, Muru wrote, on my timestamp of 17/10/2009 2:58 AM:
>> My argument is, production hardware is not cheap (we can buy 1TB for home
>> under $100, but production hardware costs thousands), less overall blocks
>> used improves performance, negligible problem with joining lookup tables.
> Completely in agreement. Denormalization might save joins, but I have yet
> to see a case where it saved on data.
> In fact, the opposite is generally the case: it greatly increases the
> amount of data that needs to be stored and therefore the amount of I/O
> used to manage it.
> Whether that increase counterbalances any perceived or actual overhead of
> joins is wide open for debate, and there is no final answer: each case has
> to be examined under its own conditions.
> Normalization was not "invented" to save disk space. It was initially
> intended to reduce the amount of I/O one has to perform to manage or
> retrieve any given piece of information.
> "Amount of I/O" is not the same as "disk space" and I know for sure which
> one causes performance problems.
> --
> Cheers
> Nuno Souto
> in sunny Sydney, Australia
> --
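
To put Nuno's point in concrete terms, here is a rough sketch. All table
and column names are made up for illustration; nothing here comes from a
real schema.

-- Normalized: the country name is stored exactly once, in a lookup table.
CREATE TABLE country (
  country_id    NUMBER        PRIMARY KEY,
  country_name  VARCHAR2(60)  NOT NULL
);

CREATE TABLE customer (
  customer_id  NUMBER         PRIMARY KEY,
  cust_name    VARCHAR2(100)  NOT NULL,
  country_id   NUMBER         NOT NULL REFERENCES country (country_id)
);

-- Retrieval pays for one join against a tiny, well-cached table...
SELECT c.cust_name, co.country_name
  FROM customer c
  JOIN country co ON co.country_id = c.country_id;

-- Denormalized: the country name is repeated on every customer row.
CREATE TABLE customer_denorm (
  customer_id   NUMBER         PRIMARY KEY,
  cust_name     VARCHAR2(100)  NOT NULL,
  country_name  VARCHAR2(60)   NOT NULL
);

-- ...but a million customers now carry a million copies of the name:
-- wider rows, more blocks, more I/O on every scan. And a simple rename
-- becomes a million-row UPDATE instead of a one-row change:
UPDATE customer_denorm
   SET country_name = 'Myanmar'
 WHERE country_name = 'Burma';

That is the "amount of I/O" versus "disk space" distinction in miniature:
the denormalized table may still fit on cheap disk, but every query and
every maintenance operation drags the repeated bytes through the buffer
cache.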

Received on Sat Oct 17 2009 - 14:30:39 CDT
