Re: Normalizing vs. Denormalizing

From: Dr. Gerhard Hergesell <hg2475_at_m30x.nbg.scn.de>
Date: 1996/03/19
Message-ID: <4imm7k$n56_at_centurio.m30x.nbg.scn.de>#1/1


A serious question, but here are some attempts to answer it:
- It's not a good idea to squeeze all data of a "record" into one
  table, since you will get longer rows, which decreases the number
  of records per (disk) block read.
- Moreover, if redundant data is stored multiple times across multiple
  records, this increases the net space needed even more.
- So, check your expected data for possible "entities" and for data
  that show up in different "records" in exactly the same way. These
  are primary candidates for separate tables.
- On the other hand, if you discover data that differ in each
  "record", and normalizing does not promise any gain in effective
  storage, you *may* consider not normalizing.
- One way to get around the whole problem is to enlarge the shared
  memory (SGA) so it holds as much data as possible, thus avoiding
  unnecessary disk access.
- A final remark: 5000 "records" per "month" is not "high performance".
  5000 "records" per half-hour is.
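
The advice above about factoring repeated "entities" into separate tables
can be sketched in a few lines. This is a minimal illustration using
SQLite rather than Oracle (which the original discussion assumes), and all
table and column names here are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Denormalized: the customer's name and city are repeated in every order
# row, making each row longer and duplicating the same strings many times.
cur.execute("""CREATE TABLE orders_flat (
    order_id      INTEGER PRIMARY KEY,
    customer_name TEXT,
    customer_city TEXT,
    amount        REAL)""")

# Normalized: the repeated "entity" (the customer) moves to its own table,
# and each order carries only a short foreign key instead of the full data.
cur.execute("""CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT,
    city        TEXT)""")
cur.execute("""CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    amount      REAL)""")

cur.execute("INSERT INTO customers VALUES (1, 'Siemens AG', 'Nuremberg')")
for i in range(3):
    cur.execute(
        "INSERT INTO orders_flat VALUES (?, 'Siemens AG', 'Nuremberg', ?)",
        (i, 100.0 * i))
    cur.execute("INSERT INTO orders VALUES (?, 1, ?)", (i, 100.0 * i))

# The same information is recovered from the normalized form with a join.
rows = cur.execute("""SELECT o.order_id, c.name, c.city, o.amount
                      FROM orders o
                      JOIN customers c USING (customer_id)
                      ORDER BY o.order_id""").fetchall()
flat = cur.execute("SELECT * FROM orders_flat ORDER BY order_id").fetchall()
assert rows == flat
print(rows)
```

The trade-off is exactly the one described above: the normalized form
stores the customer strings once instead of once per order, at the cost
of a join at read time.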

Dr. Gerhard A. Hergesell
SIEMENS AG AUT 65
Nuremberg, Germany
Email: hg2475_at_m30x.nbg.scn.de
Phone: (+49) 911-895-2475
