Help with freeware db for web use.

From: <scott_at_scam.XCF.Berkeley.EDU>
Date: 1997/09/02
Message-ID: <5uhvr3$825_at_agate.berkeley.edu>


I'm toying with putting together a simple little rdb engine in Perl and Java for use in small web applications. One feature I'd like to implement is field indexing, so that queries on key fields can be reasonably fast.

I'm not really sure how indexing is implemented in commercial rdbs, but one method I figured I might try is to build a hashtable for each indexed field. Each hashtable entry would hold the hash, the key field value, and a recordNumber pointing into the master data table. The recordNumber would let the relevant record be retrieved in O(1) time while sparing me from duplicating the entire record in the hashtable.
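Roughly what I have in mind for a single index, sketched in Java (the class and method names are just placeholders I made up, and a "record" here is nothing more than a String[]):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // One index for one field: maps a key-field value to a recordNumber,
    // which is just a position in the master record list.
    class FieldIndex {
        private final Map<String, Integer> index = new HashMap<String, Integer>();

        void put(String keyValue, int recordNumber) {
            index.put(keyValue, recordNumber);
        }

        // O(1) on average: hash the key value, fetch the recordNumber,
        // then jump straight to the record in the master table.
        String[] lookup(String keyValue, List<String[]> masterTable) {
            Integer recordNumber = index.get(keyValue);
            return recordNumber == null ? null : masterTable.get(recordNumber);
        }
    }

Only the record itself stays in the master table; the index duplicates just the key value and an integer.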

The hashtables would initially be perfect, but as records are inserted, individual hash keys might come to resolve to multiple records. The user would have the option of re-indexing the table from time to time to make all the hashtables perfect again.
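Something like the following is what I mean by "perfect at first, then degrading" (again only a sketch with made-up names): a fixed bucket count, collisions chained inside a bucket, and re-indexing as a full rebuild into a larger table.

    import java.util.ArrayList;
    import java.util.List;

    // Fixed number of buckets; a collision just chains inside a bucket,
    // and re-indexing rebuilds everything so buckets go back toward one entry each.
    class BucketedIndex {
        static class Entry {
            final String keyValue;
            final int recordNumber;
            Entry(String k, int r) { keyValue = k; recordNumber = r; }
        }

        private List<List<Entry>> buckets;

        BucketedIndex(int bucketCount) { buckets = newBuckets(bucketCount); }

        void put(String keyValue, int recordNumber) {
            int slot = (keyValue.hashCode() & 0x7fffffff) % buckets.size();
            buckets.get(slot).add(new Entry(keyValue, recordNumber)); // may collide
        }

        // "Re-indexing": rebuild into a bigger table so lookups are fast again.
        void reindex(int newBucketCount) {
            List<List<Entry>> old = buckets;
            buckets = newBuckets(newBucketCount);
            for (List<Entry> bucket : old)
                for (Entry e : bucket)
                    put(e.keyValue, e.recordNumber);
        }

        private static List<List<Entry>> newBuckets(int n) {
            List<List<Entry>> b = new ArrayList<List<Entry>>(n);
            for (int i = 0; i < n; i++) b.add(new ArrayList<Entry>());
            return b;
        }
    }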

My problem is: with the current scheme, an insert or delete in the middle of the master table shifts the recordNumber of every record after it, so all the hashtables would have to be scanned and every recordNumber adjusted accordingly. This is ridiculous, of course.
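To make that concrete, here is the delete case as I picture it (still just a sketch; assume the master table is an in-memory list and each field index is a map from key value to recordNumber):

    import java.util.List;
    import java.util.Map;

    class DeleteProblem {
        // Deleting one record compacts the master list, so every record after
        // it shifts down by one, and every entry of every index has to be
        // scanned and patched. (The deleted record's own index entries would
        // also need removing; omitted here.)
        static void delete(int deletedRecord,
                           List<String[]> masterTable,
                           List<Map<String, Integer>> allFieldIndexes) {
            masterTable.remove(deletedRecord);

            for (Map<String, Integer> index : allFieldIndexes) {
                for (Map.Entry<String, Integer> e : index.entrySet()) {
                    if (e.getValue() > deletedRecord) {
                        e.setValue(e.getValue() - 1);
                    }
                }
            }
        }
    }

That inner loop over every entry of every index, on every insert or delete, is exactly the part I want to avoid.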

So, is there a clever way I can use hashtables to support indexed fields without having to duplicate all the data in each hashtable? Do commercial rdbs modify their indexes every time a record is inserted or deleted?

Scott
