Optimizing garbage collection
From: Eray Ozkural <examachine@g...>
Subject: Re: [Caml-list] Re: Optimizing garbage collection
On Mon, Nov 22, 2010 at 6:42 PM, Sylvain Le Gall <sylvain@le-gall.net> wrote:

> On 22-11-2010, Damien Doligez <damien.doligez@inria.fr> wrote:
> >
> > On 2010-11-21, at 20:26, Eray Ozkural wrote:
> >
> >> I've been thinking whether some kind of doubling strategy would work for
> the minor heap size. What do you think?
> >
> > Sounds like an interesting idea, but what heuristic would you use?
> > When everything is smooth, the running time decreases something like
> > exponentially with the minor heap size, so you'd always want to
> > increase the size.  How do you tell when to stop?  And then, if the
> > program is not behaving uniformly, when do you decide to reduce
> > the size?
> >
>
> How do you tell when to stop?
> ->
>
> Maybe you can stop when you reach (the size of the L2/L3 cache of the
> processor) / number of core.
>
> Both pieces of information are straightforward to read from /proc/cpuinfo.
>
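(Sylvain's two numbers can be pulled out of /proc/cpuinfo-style text like this; a sketch where the sample string stands in for reading the real file, and the 8192 KB figure is just illustrative:)

```ocaml
(* Sketch: parse the cache size (KB) and core count from text in the
   /proc/cpuinfo format. The sample string below is a stand-in; real
   code would read /proc/cpuinfo itself (Linux-specific). *)
let sample =
  "processor\t: 0\ncache size\t: 8192 KB\nprocessor\t: 1\ncache size\t: 8192 KB\n"

let lines s = String.split_on_char '\n' s

(* Avoids String.starts_with, which only appeared in OCaml 4.13. *)
let starts_with prefix s =
  String.length s >= String.length prefix
  && String.sub s 0 (String.length prefix) = prefix

let cache_kb =
  match List.find_opt (starts_with "cache size") (lines sample) with
  | Some l -> Scanf.sscanf l "cache size\t: %d" (fun n -> n)
  | None -> 0

let cores = List.length (List.filter (starts_with "processor") (lines sample))

(* Per-core share of the cache, as suggested above. *)
let per_core_kb = cache_kb / cores
```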


Yeah, that's what I had in mind: determine a sensible upper bound to
grow to. Cache size makes some sense, though I think, as recently mentioned,
the "working set size" is the relevant quantity; if the garbage collector
could deduce it, that suggestion would be sensible too. You could also set
the cap to something like 1/4 of physical RAM. That kind of logic is used in
some out-of-core data mining algorithms.
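(The doubling-up-to-a-cap idea can be sketched with the stock Gc module; the 8 MB budget here is a placeholder for whatever cache or RAM share the heuristic picks, not a measured value:)

```ocaml
(* Sketch of the doubling strategy: grow the minor heap geometrically,
   capped at an assumed per-core cache budget. Gc.minor_heap_size is
   measured in words, hence the conversion from bytes. *)
let words_of_bytes b = b / (Sys.word_size / 8)

(* Placeholder budget, e.g. one core's share of the L3 cache. *)
let cap_words = words_of_bytes (8 * 1024 * 1024)

let grow_minor_heap () =
  let c = Gc.get () in
  let doubled = min (2 * c.Gc.minor_heap_size) cap_words in
  Gc.set { c with Gc.minor_heap_size = doubled };
  doubled
```

A real heuristic would also need the shrinking side Damien asks about; this only covers the growth direction.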

The objective here is to amortize the cost of copying until the working set
size is reached; otherwise there will be disk thrashing anyway!

Best,

-- 
Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
http://groups.yahoo.com/group/ai-philosophy
http://myspace.com/arizanesil http://myspace.com/malfunct