Date: 2005-03-30 (09:41)
From: Alex Baretta <alex@b...>
Subject: Impact of GC on memoized algorithm
I am trying to fine-tune a "cut stock" optimization algorithm which I
have written in OCaml. It so happens that the vanilla recursive
implementation is quite fast relative to other implementations I've
seen, whereas the memoized implementation is very slow. The vanilla
implementation has a memory footprint of only a few megabytes, whereas
the memoized version takes up some 60 MB on the same input. Quite
obviously, the hashtable used to cache the results suffers from the
combinatorial explosion of possible inputs. Given this, it is to be
expected that the memoized version is as slow as or slightly slower than
the non-memoized version, but it is not obvious why it should be a
couple of orders of magnitude slower.
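For concreteness, here is a minimal sketch of the memoization pattern
described above: a recursive function whose results are cached in a
Hashtbl. The combinator name `memo_rec` and the Fibonacci example are
illustrative, not from the original program.

```ocaml
(* Tie the recursive knot through a cache: every recursive call goes
   back through the cache lookup. *)
let memo_rec f =
  let cache = Hashtbl.create 4096 in
  let rec g x =
    try Hashtbl.find cache x
    with Not_found ->
      let y = f g x in
      Hashtbl.add cache x y;
      y
  in
  g

(* A memoized Fibonacci built with the combinator. *)
let fib =
  memo_rec (fun self n ->
    if n < 2 then n else self (n - 1) + self (n - 2))
```

When the key space explodes combinatorially, as in the cut-stock case,
the cache mostly misses, so each call pays the hashing and allocation
cost on top of the plain recursion.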

I have come to think that the difference in performance might be
attributable to the garbage collector. This might be the case if the
complexity of a collection is proportional to the total allocated
memory, in which case the overall allocation/reclamation cost of a
fresh block would be proportional to the total number of allocated
blocks.
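One way to probe this hypothesis without changing the algorithm is to
tune the collector through the `Gc` module: enlarging the minor heap
and raising `space_overhead` makes the major collector run less
aggressively. The values below are illustrative guesses, not
recommendations.

```ocaml
(* Make the GC lazier: bigger minor heap, higher space_overhead
   (the classic default is 80). If the slowdown is GC-driven, this
   should narrow the gap at the price of a larger heap. *)
let () =
  let params = Gc.get () in
  Gc.set { params with
           Gc.minor_heap_size = 4 * 1024 * 1024;  (* in words *)
           Gc.space_overhead  = 200 }
```

If the memoized version speeds up markedly under these settings, the
GC is at least part of the story.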

Is my diagnosis correct? That is, is GC collection an O(n) algorithm?
Would the C-language malloc/free model be any faster here? As far as I
know, malloc keeps a linked list of free blocks, which is also O(n).
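Before settling the complexity question, it may help to measure how
much collection work each version actually triggers. A small wrapper
around `Gc.stat` can report that; the helper name `with_gc_report` is
made up for this sketch.

```ocaml
(* Run f x and report how many minor/major collections it caused,
   by differencing Gc.stat counters. *)
let with_gc_report f x =
  let s0 = Gc.stat () in
  let y = f x in
  let s1 = Gc.stat () in
  Printf.printf "minor collections: %d, major collections: %d\n"
    (s1.Gc.minor_collections - s0.Gc.minor_collections)
    (s1.Gc.major_collections - s0.Gc.major_collections);
  y
```

Comparing the counts for the vanilla and memoized runs would show
whether the memoized version is really collection-bound or merely
paying hashing and allocation overhead.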


Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)

The FreerP Project