Re: The TransRelational Model: Performance Concerns
Date: Fri, 12 Nov 2004 19:35:01 +0100
Message-ID: <hb0ap0hlndp11nu3ig7rgulen91diq5hrf_at_4ax.com>
On Fri, 12 Nov 2004 17:06:38 GMT, "Dan" <guntermann_at_verizon.com> wrote:
> that
>> the implementation of the TRM tables don't use BTrees or hashing
>> techniques
>
>Wasn't the paper quite explicit that binary searches over binary relations
>were used in contrast to B*Trees?
>This is obviously not the case after closer reading. But a linked, ordered
>reconstruction table bothers me. For relations of large degree, there seems
>to me to be a fixed overhead in traversing all attributes.
The ring structure is only one of the possible structures we may use.
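To make the "fixed overhead in traversing all attributes" point concrete, here is a minimal sketch of tuple reconstruction through a ring-linked reconstruction table. The layout and names (`field_values`, `recon`) are my own illustration, not Date's exact structures: each column keeps its values sorted, and each cell of the reconstruction table points to the row holding the same record's value in the next column.

```python
def reconstruct_tuple(field_values, recon, start_row):
    """Follow the ring of pointers starting at row `start_row` of
    column 0, collecting one field value per column. The cost is
    one pointer hop per attribute, whatever the degree."""
    ncols = len(field_values)
    tup = []
    row = start_row
    for col in range(ncols):
        tup.append(field_values[col][row])  # value for this column
        row = recon[col][row]               # pointer into the next column
    return tuple(tup)

# Tiny two-tuple relation (S#, NAME, QTY) as an illustration:
fv = [["S1", "S2"], ["Jones", "Smith"], [10, 20]]
recon = [[1, 0], [0, 1], [1, 0]]
print(reconstruct_tuple(fv, recon, 0))  # ('S1', 'Smith', 20)
print(reconstruct_tuple(fv, recon, 1))  # ('S2', 'Jones', 10)
```

Note that reconstructing even one attribute's record costs a hop through every column, which is exactly the fixed per-degree overhead being discussed.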
>, that a relation or a significant part of a 100000 tuples
>> relation don't fit in main memory, he only considers the disk page
>> loading costs, etc.
>>
>But he makes the same assumption with the conventional relational DBMS as
>well. This puts it on equal footing doesn't it?
But that is not a realistic assumption, and he never studies what happens when plenty of RAM is available. The analysis is heavily biased.
These days it is unthinkable to run a DB server with less than several GB of RAM, and that is more than enough for most small companies' databases.
A big company could already afford to build a terabyte-RAM DB server from a cluster of PC boards, and it will become far more affordable in the near future with the 64-bit PC architectures.
In any case, we only need enough RAM to hold a significant part of the most-used tables in order to avoid most of the disk loads.
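The back-of-envelope arithmetic behind this is simple; the row width here is an assumption on my part, not a figure from the paper:

```python
# Rough sizing of the 100,000-tuple relation from the paper,
# assuming an average row of ~200 bytes including per-column
# pointer/index overhead (an assumed width, for illustration).
rows = 100_000
bytes_per_row = 200
table_mb = rows * bytes_per_row / 2**20
print(f"{table_mb:.1f} MiB")  # prints 19.1 MiB
```

So the whole benchmark relation is on the order of 20 MiB; hundreds of such tables fit comfortably in a few GB of RAM, which is why the disk-page-only cost model is unrealistic.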
>> He also assumes that we need to make room for the inserts and the room
>> is not already there.
>>
>If field value tables are really ordered domains, any new value in each one
>will require overhead to place the new value in the correct spot, no?
We can find an empty slot quickly with an efficient search algorithm, so an insert does not have to shift the entire table.
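One way this can work, sketched under my own assumptions rather than anything the book specifies: keep spare (empty) slots scattered through the ordered field-values table, so an insert only shifts values up to the nearest gap instead of moving the whole tail.

```python
def insert_sorted_with_gaps(slots, value):
    """Insert `value` into `slots`, a fixed-size list holding sorted
    values with None gaps. Only the run between the insertion point
    and the nearest following gap is shifted."""
    # Locate the insertion point among non-empty slots (linear scan
    # here for clarity; a real implementation would binary-search).
    pos = 0
    while pos < len(slots) and slots[pos] is not None and slots[pos] < value:
        pos += 1
    # Find the nearest empty slot at or after the insertion point.
    gap = pos
    while gap < len(slots) and slots[gap] is not None:
        gap += 1
    if gap == len(slots):
        raise ValueError("no free slot; table needs reorganizing")
    # Shift the run [pos, gap) right by one, then place the value.
    slots[pos + 1 : gap + 1] = slots[pos:gap]
    slots[pos] = value

table = [10, 20, None, 40, None]
insert_sorted_with_gaps(table, 15)
print(table)  # [10, 15, 20, 40, None]
```

The cost of an insert is then bounded by the distance to the nearest gap, not by the table size, at the price of periodic reorganization when the gaps run out.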
Regards

Received on Fri Nov 12 2004 - 19:35:01 CET
