[1/2 OT] Indexing (and mergeable Index-algorithms)
Date: 2005-11-17 (22:10)
From: Brian Hurt <bhurt@s...>
Subject: Re: [Caml-list] [1/2 OT] Indexing (and mergeable Index-algorithms)

On Fri, 18 Nov 2005, skaller wrote:

> On Thu, 2005-11-17 at 12:08 -0600, Brian Hurt wrote:
>>> I'm not sure what it is we disagree on.
>> They don't ever need global rebalancing.
> Yup, they do not need it to maintain the invariant
> you stated. But performance can still benefit from it
> significantly.
> Two quite distinct BTrees can contain the same data exactly,
> it depends on the order of insertion of keys. If you are
> clever, you can fill up a BTree so every block is exactly
> full .. however you don't need to be clever to get the worst
> possible BTree -- just fill an empty tree with sorted
> data and almost all the blocks are guaranteed to be half
> full (the worst possible case for storage use and
> access time). Yet, this is a common case in practice.
> ** all the blocks will be half full except those on
> the right edge of the tree -- for a tree of depth 5
> that's millions of half full blocks, and only 5 that
> can possibly be more than half full :)

This is the worst possible case: every block is half full.  It means 
that instead of touching log_k(N) blocks, you're touching log_{k/2}(N) 
blocks.  So if N=2^32 and k=256, you need to read 5 blocks instead of 4 
(128^5 = 2^35 > 2^32).  And the total number of blocks roughly doubles. 
Note, though, that the binary search within each block is now cheaper 
(by one step), and the cost of inserting an element is halved.
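A quick back-of-envelope check of those figures (a sketch, not anything from the original post -- the helper name is mine):

```python
import math

def blocks_touched(n, fanout):
    """Height of a B-tree over n keys where every block has ~`fanout` children.

    A full tree has fanout k; a tree of half-full blocks has fanout k/2.
    """
    return math.ceil(math.log2(n) / math.log2(fanout))

N = 2 ** 32
k = 256

full = blocks_touched(N, k)        # every block full: branching factor 256
half = blocks_touched(N, k // 2)   # every block half full: branching factor 128
print(full, half)                  # 4 blocks vs 5 blocks
```

So for these parameters the half-full tree costs one extra block read per lookup, exactly as claimed.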

So the question becomes: is the performance advantage gained by 
rebalancing worth the cost?

If I were worried about it, I'd be inclined to be more aggressive about 
merging and splitting nodes.  Basically, if a node falls under 5/8ths 
full, I'd look to steal some children from its siblings; if it goes over 
7/8ths full, I'd look to shed some children to its siblings.  Note that 
if you have three nodes, each 1/2 full, you can combine the three into 
two nodes, each 3/4ths full.  You want to keep nodes about 3/4ths full, 
as that makes both adding and deleting elements cheaper.
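The three-into-two combination can be sketched as a simple redistribution of keys.  This is a deliberately simplified picture: it ignores the parent's separator keys and child pointers, and the function name is mine, not from the post.

```python
def three_to_two(a, b, c):
    """Combine the sorted keys of three sibling nodes into two nodes.

    If each input holds k/2 keys (minimum occupancy), each output
    holds 3k/4 keys -- the target occupancy discussed above.
    Parent separators and children are omitted for simplicity.
    """
    keys = a + b + c
    mid = len(keys) // 2
    return keys[:mid], keys[mid:]

# Three nodes at minimum occupancy for capacity k = 8 (4 keys each):
left, right = three_to_two([1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12])
print(left, right)  # two nodes of 6 keys each: 3/4 of capacity 8
```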

> The version I played with did 'scrolling', where
> instead of splitting an overfull block, you scroll
> a key (and child) across to one of its siblings
> via the parent. This is quite tricky ..  but it
> fixes the sorted insertion problem nicely --
> with this modification, all the blocks are
> always FULL .. except those on the right most
> edge and their left siblings: they grow until
> they're both full, then the rightmost one (only)
> is split. So at worst, you waste 5 blocks
> out of millions.. basically this doubles the
> capacity of the tree (built from a sorted list).
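The "scrolling" move described above amounts to a rotation through the parent.  Here is a minimal sketch of the leaf-level case only (internal nodes would also carry a child pointer across); the names and the flat-list representation are my own illustration, not the poster's implementation:

```python
def scroll_right(parent_keys, sep_index, node, sibling, cap):
    """'Scroll' one key from an overfull node to its right sibling.

    Instead of splitting `node` when it exceeds capacity, its largest
    key moves up to replace the parent's separator, and the old
    separator moves down to the front of the right sibling.
    Leaf nodes only: no child pointers are moved.
    """
    assert len(node) > cap and len(sibling) < cap
    sep = parent_keys[sep_index]
    parent_keys[sep_index] = node.pop()   # largest key of node goes up
    sibling.insert(0, sep)                # old separator comes down

parent = [50]
node = [10, 20, 30, 40]                   # overfull, capacity 3
sib = [60, 70]
scroll_right(parent, 0, node, sib, cap=3)
print(parent, node, sib)  # [40] [10, 20, 30] [50, 60, 70]
```

Note the ordering invariant survives: everything in `node` is still below the new separator, and everything in `sib` is at or above it.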

Two problems with this: first, what happens when the sibling is full 
too?  You can get into cases where a single insert costs O(N).  And 
second, this assumes inserts only (I can still reach the worst case 
with deletes).