Impact of GC on memoized algorithm
Date: 2005-03-30 (15:03)
From: Alex Baretta <alex@b...>
Subject: Re: [Caml-list] Impact of GC on memoized algorithm
Jon Harrop wrote:

>>> This is rather unlikely. The key to the hashtable is a unique integer...
> So not a [0|1] list like the /. troll then?  ;-) 

No, definitely not. Actually, I am trying to exploit the benefits of 
functional programming to further speed up a fairly fast (but, alas, 
worst-case exponential) cutting-stock algorithm. The present one works 
as a bytecode library in our AS/Xcaml application server, and provides 
performance comparable to that of the most widely available commercial 
implementations. Yet, it is necessary to do better to win the market 
over, and that takes a little more effort than switching from ocamlc to 
ocamlopt ;)

>>> Rather, what happens, time-wise, if I create a hashtable with 4096 slots
>>> and end up filling it with several million key-value pairs?
> The hashtable will dynamically double its size each time it feels full. This 
> incurs an O(n) cost at sizes n = 2^p - 1 for integer p>0, IIRC. 
> This will cause the program to stutter but should not adversely impact 
> overall performance. It would, however, be a problem for real-time applications.
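The automatic growth Jon describes can be seen directly: a minimal sketch (the
table size and key range here are arbitrary, chosen only for illustration)
creating a Hashtbl with just 16 initial buckets and inserting far more bindings
than that. The table resizes itself, so individual lookups stay fast despite
the tiny initial capacity.

```ocaml
(* Start deliberately small: 16 buckets for 100_000 bindings.
   OCaml's Hashtbl grows automatically as the load rises. *)
let tbl : (int, int) Hashtbl.t = Hashtbl.create 16

let () =
  for i = 1 to 100_000 do
    Hashtbl.add tbl i (i * i)
  done
```

After the loop, `Hashtbl.length tbl` is 100_000 and any key can still be found
without scanning long bucket chains, because the resizes kept the load factor
bounded.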

Marcin 'Qrczak' Kowalczyk wrote:
> "Alexander S. Usov" <> writes:
> No, OCaml's hash tables are resized automatically.

Ok. So, just as I expected, I am guaranteed that hash conflicts will not 
desperately degrade the performance of my algorithm. But what is the 
amortized complexity of an insertion into a resizable hashtable? Am I 
right in stating that it is O(log n)? Or is it perhaps O(n) due to 
saturation of the number of buckets available to the hashtable? In that 
case, I would need to expand the number of buckets by allocating a 
super-hashtable implemented as a hashtable of hashtables (as someone 
already suggested) or as an array of hashtables.
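The "super-hashtable" idea could look something like the following sketch.
Everything here is illustrative (the shard count, the `add`/`find_opt` names,
the string payload): an outer table keyed on `key mod shards` dispatches to
smaller inner tables, so no single table has to hold millions of buckets.

```ocaml
(* Hypothetical hashtable-of-hashtables: keys are split across
   [shards] inner tables by their residue modulo [shards]. *)
let shards = 256

let super : (int, (int, string) Hashtbl.t) Hashtbl.t =
  Hashtbl.create shards

let add key value =
  let shard = key mod shards in
  let inner =
    match Hashtbl.find_opt super shard with
    | Some t -> t
    | None ->
        (* Lazily create the inner table for this shard. *)
        let t = Hashtbl.create 4096 in
        Hashtbl.add super shard t;
        t
  in
  Hashtbl.replace inner key value

let find_opt key =
  match Hashtbl.find_opt super (key mod shards) with
  | Some inner -> Hashtbl.find_opt inner key
  | None -> None
```

Whether this beats a single auto-resizing Hashtbl is exactly the open question
above; the sharding mostly spreads the cost of resizes across many smaller
tables rather than eliminating it.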

And, then again, how does Gc.full_major scale as the "cache" for the 
algorithm fills up with millions of key-value pairs? Is the GC linear in 
the number of *reclaimed* blocks, or is it linear in the *total* number 
of allocated blocks?
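One way to probe this empirically, rather than from first principles, is to
time Gc.full_major as the cache grows. This is only a measurement sketch (the
cache contents and sizes are made up): in OCaml's collector, marking traverses
the live heap and sweeping walks the whole major heap, so one would expect the
cost of a full major collection to grow with the amount of retained data, not
with the amount reclaimed.

```ocaml
(* Time a single forced full major collection. *)
let time_full_major () =
  let t0 = Sys.time () in
  Gc.full_major ();
  Sys.time () -. t0

let () =
  let cache : (int, string) Hashtbl.t = Hashtbl.create 4096 in
  (* Grow the cache in steps and time a full major GC at each size. *)
  List.iter
    (fun n ->
       for i = Hashtbl.length cache + 1 to n do
         Hashtbl.add cache i (string_of_int i)
       done;
       Printf.printf "entries: %8d  full_major: %.4fs\n"
         n (time_full_major ()))
    [100_000; 1_000_000]
```

If the printed times grow roughly with the number of live entries, that points
at the *total live* heap, not the reclaimed blocks, dominating the cost.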


Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)

The FreerP Project