Re: Data management algorithm needed
Date: Thu, 03 Jan 2002 01:12:45 -0800
Message-ID: <pan.2002.01.03.01.12.38.125.2345_at_yahoooo.com>
In article <3C30B611.34D7B51B_at_no_spam_today_thanks.com>, "Chris" <spamfree_at_no_spam_today_thanks.com> wrote:
> I need to manage random-sized blocks of data: fetch them quickly, read
> them, write them, etc. When reading and writing such blocks into memory
> or into a file, there is the problem of fragmentation -- when you delete
> a block, how do you re-use the space, particularly when all blocks are
> of odd sizes?
>
> I know that databases typically handle the problem by organizing
> everything into data pages, where each page has some free space, and
> when the space is filled the page is split. There are different ways of
> organizing and indexing the pages.
>
The row/page (or row/block) paradigm is basically a bet that each object (row) will be roughly the same size. Grouping rows into fixed-size pages lets the sizes average out, with the page's free space acting as shared headroom. It's not a good fit for objects whose sizes vary widely.
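For instance, a slotted page looks something like this -- a from-scratch
sketch in C, not any particular database's layout, and PAGE_SIZE and the
field names are made up:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 8192   /* assumed; use whatever your I/O unit is */

    /* The slot directory grows down from the header, record bytes grow
       up from the end of the page; the gap in between is the shared
       free space that lets odd-sized rows average out. */
    typedef struct {
        uint16_t nslots;     /* slot entries in use                   */
        uint16_t free_end;   /* records live at [free_end, PAGE_SIZE) */
        uint16_t slot[1];    /* slot[i] = offset of record i in page  */
    } PageHeader;

    void page_init(unsigned char *page)
    {
        PageHeader *h = (PageHeader *)page;
        h->nslots = 0;
        h->free_end = PAGE_SIZE;
    }

    /* Returns the new record's slot number, or -1 when the page is
       full -- which is exactly the point where a database splits it. */
    int page_insert(unsigned char *page, const void *rec, uint16_t len)
    {
        PageHeader *h = (PageHeader *)page;
        size_t dir_end = offsetof(PageHeader, slot)
                       + (size_t)(h->nslots + 1) * sizeof(uint16_t);
        if (dir_end + len > h->free_end)
            return -1;
        h->free_end -= len;
        memcpy(page + h->free_end, rec, len);
        h->slot[h->nslots] = h->free_end;
        return h->nslots++;
    }

Deleting a record just leaves a hole inside the page, and compacting it
is cheap because everything sits in one 8K buffer.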
> Does anyone know of a good survey of the techniques for solving this
> class of problems? Perhaps a book or a chapter of a book?

There's a book called _Garbage Collection: Algorithms for Automatic
Dynamic Memory Management_ by Richard Jones and Rafael Lins. I haven't
read it, but it gets good reviews. See Amazon.
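
As for your original question of reusing the space of odd-sized freed
blocks in a file: the classic answer is a free list of holes with
first-fit allocation, coalescing adjacent holes on free. A rough sketch
-- the names and the in-memory singly-linked list are my own invention,
and a real system would persist this structure in the file itself:

    #include <stdlib.h>

    /* One node per hole in the file, kept sorted by offset so that
       adjacent holes can be merged when a block is freed. */
    typedef struct Extent {
        long off, len;          /* byte offset and length of the hole */
        struct Extent *next;
    } Extent;

    /* First fit: carve the request out of the first hole big enough.
       Returns a file offset, or -1 so the caller can extend the file. */
    long extent_alloc(Extent **list, long want)
    {
        Extent **pp, *e;
        for (pp = list; (e = *pp) != NULL; pp = &e->next) {
            if (e->len < want)
                continue;
            long off = e->off;
            e->off += want;
            e->len -= want;
            if (e->len == 0) {  /* exact fit: drop the empty node */
                *pp = e->next;
                free(e);
            }
            return off;
        }
        return -1;
    }

    /* Return a block to the list, coalescing with both neighbours so
       freed odd-sized blocks grow back into larger reusable holes. */
    void extent_free(Extent **list, long off, long len)
    {
        Extent **pp = list, *e, *prev = NULL;
        while ((e = *pp) != NULL && e->off < off) {
            prev = e;
            pp = &e->next;
        }
        if (prev && prev->off + prev->len == off) {
            prev->len += len;                 /* merge with hole before */
            if (e && prev->off + prev->len == e->off) {
                prev->len += e->len;          /* and bridge to hole after */
                prev->next = e->next;
                free(e);
            }
        } else if (e && off + len == e->off) {
            e->off = off;                     /* merge with hole after */
            e->len += len;
        } else {
            Extent *n = malloc(sizeof *n);    /* isolated hole */
            n->off = off;
            n->len = len;
            n->next = e;
            *pp = n;
        }
    }

First fit plus coalescing keeps fragmentation tolerable in practice,
though with wildly varying sizes you still want to round requests up to
some granule so freed holes are more likely to match later requests.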