Re: How to efficiently make an "history" ?

From: Todd Gillespie <toddg_at_linux128.ma.utexas.edu>
Date: Thu, 30 Aug 2001 16:55:49 +0000 (UTC)
Message-ID: <9mlr6l$g71$1_at_geraldo.cc.utexas.edu>


kesako <kesako_at_mail.com> wrote:
: This is because *it* aims to be narrative! I wanted to express the concept I
: wanted to reach. As you most probably know, personal narrative is much
: less accurate but much more powerful for expressing ideas. Try to tell your
: girlfriend you love her in DDL ;)

"Wandering Explanation, Pointless Metaphor"

:> Secondly, SQL has no "fields"  -- tables are made of
:> rows, rows are made of columns and columns hold values.  This is
:> important!!

: I didn't know rows were made of columns. I thought that tables were made of
: rows *and* columns.

A is in B, B is in C, thus C is comprised of B and A. Columns are in rows, rows are in tables, tables are comprised of rows and columns.

Correct, of course, but that doesn't change the fact that columns are part of rows.

: Why do you seem allergic to the word "field"? We both seem to understand
: what it means and I don't see a possible confusion with anything in SQL.

Celko has made a living by being accurate.

:> >> Furthermore, when a user, from the front-end, updates a particular
:> column [sic: field], we want to have a revision comment and to keep
:> all old data i.e. change 'name' from 'james' to 'bob' with revision
:> comment 'he prefers shorter names...'. <<
:>
:> Don't do this -- use a log file if you need to follow the history and
:> keep a separate text file with this useless commentary.

: I *need* to keep all old values, and why they have been changed. This is a
: business requirement.

Like he said, keep a log file.
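
If the history really has to stay queryable, the usual compromise is a
separate audit table rather than folding old values into the live rows.
A rough sketch in generic SQL (all names here are hypothetical):

    -- One row per change; the live table keeps only current values.
    CREATE TABLE person_audit (
        person_id      INTEGER      NOT NULL,
        column_name    VARCHAR(30)  NOT NULL,  -- which column changed
        old_value      VARCHAR(255),
        new_value      VARCHAR(255),
        changed_at     TIMESTAMP    NOT NULL,
        change_comment VARCHAR(255)            -- 'he prefers shorter names...'
    );

Your front-end (or a trigger, if your RDBMS has them) inserts one row
here on every update, and the live table stays small.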

: Exactly the same case exists in the retail business: you want to keep the
: history (maybe not the right term... again) of all prices a particular item
: had at a particular moment. You want to be able to query this, make
: calculations, ...

: And what is the advantage of a text file over a SQL database in this case?

You don't have terabytes of old rows sitting around in OLTP tables.
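
For the price example specifically, the textbook shape is an
effective-dated table. A sketch (names hypothetical; the DATE literal
syntax varies by vendor):

    CREATE TABLE item_price (
        item_id    INTEGER       NOT NULL,
        price      DECIMAL(10,2) NOT NULL,
        start_date DATE          NOT NULL,
        end_date   DATE,          -- NULL means still in effect
        PRIMARY KEY (item_id, start_date)
    );

    -- Price of item 42 as it stood on 1999-12-31:
    SELECT price
      FROM item_price
     WHERE item_id = 42
       AND start_date <= DATE '1999-12-31'
       AND (end_date > DATE '1999-12-31' OR end_date IS NULL);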

: In one year, we may imagine that we will have about 100'000 rows and, after
: a "data feeding period" (terminology... again), we will have relatively
: minimal data activity... a few hundred new records each day at most.

: I have just artificially populated a database with 600'000 records matching
: the previous pseudo-description and ran some of the queries I've described
: above, and it runs quite well (subjective judgement).

Please tell us what the objective judgement is.
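
For instance: row counts, wall-clock timings, and the plans your
optimizer picks. Most products have some form of EXPLAIN; a
PostgreSQL-flavoured sketch against the hypothetical item_price table
above (the exact command varies by RDBMS):

    -- Show the plan the optimizer chose for the slow query:
    EXPLAIN
    SELECT item_id, price
      FROM item_price
     WHERE start_date <= DATE '2000-06-01';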

: I continue my investigations.

I recommend investigating your RDBMS's support for partitioning. If it is not too clunky, partition all your tables into 'current' and 'historical' areas. Then you can keep all the historical rows around, but queries on current data will remain fast.
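
Syntax is vendor-specific; as an Oracle-flavoured sketch of range
partitioning by date (names hypothetical -- check your own manual):

    CREATE TABLE item_price (
        item_id    INTEGER       NOT NULL,
        price      DECIMAL(10,2) NOT NULL,
        start_date DATE          NOT NULL
    )
    PARTITION BY RANGE (start_date) (
        PARTITION p_historical
            VALUES LESS THAN (TO_DATE('2001-01-01', 'YYYY-MM-DD')),
        PARTITION p_current
            VALUES LESS THAN (MAXVALUE)
    );

Queries that touch only recent dates never scan the historical
partition, so the old rows cost you little at run time.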
