
Date: 2008-07-11 (15:09)
From: Jon Harrop <jon@f...>
Subject: Re: [Caml-list] thousands of CPU cores
On Friday 11 July 2008 15:03:48 Basile STARYNKEVITCH wrote:
> It is not specific to Linux (and probably not even to *opensource*
> functional programming languages; I believe proprietary functional
> language implementations face the same problems).

Indeed, Mathematica has the same problem but, I believe, Wolfram Research are 
migrating it to the JVM for this reason.

> In my perception, 
> functional programming requires *blindingly fast* memory allocation for
> values which become garbage quickly. This seems a property of 
> functional programming (and more generally of any programming style
> discouraging side effects); in other words, functional programming needs
> very efficient garbage collectors (A. Appel wrote stuff on this almost
> 20? years ago).
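
Basile's point can be illustrated with a minimal OCaml sketch (not from the original mail): each pipeline stage below allocates a fresh intermediate list that dies almost immediately, and `Gc.minor_words` shows how much of that allocation the minor heap absorbs.

```ocaml
(* Minimal sketch of allocation-heavy functional style: each pipeline
   stage allocates a fresh intermediate list that becomes garbage as
   soon as the next stage has consumed it. *)
let () =
  let before = Gc.minor_words () in
  let sum =
    List.init 100_000 (fun i -> i)
    |> List.map (fun x -> x + 1)   (* fresh list, soon dead *)
    |> List.map (fun x -> x * 2)   (* another fresh list *)
    |> List.fold_left ( + ) 0
  in
  let allocated = Gc.minor_words () -. before in
  Printf.printf "sum = %d, minor words allocated = %.0f\n" sum allocated;
  (* All of this short-lived garbage lands on the minor heap, which is
     why a fast generational minor collector matters so much here. *)
  assert (sum = 10_000_100_000)
```

This is exactly the lifetime distribution a generational collector is tuned for: almost everything allocated is dead by the next minor collection.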

Although that is established functional folklore, I believe it is misguided to 
apply it to more mainstream concerns. Moreover, the problem can be largely 
avoided by adopting a more modern JIT-based approach to language 
implementation anyway.

OCaml and its ancestors and relatives like Haskell have traditionally been 
used by academics for applications with the value lifetime distribution that 
you describe (very high allocation rates for short-lived values), where it is 
not unusual to see 30% of CPU time spent in the GC.

However, OCaml really pioneered the use of this family of languages in 
completely different applications, such as numerical methods for scientific 
computing, thanks to OCaml's unusually good floating-point performance. Such 
applications do not share the characteristic that you describe, but they still 
benefit enormously from first-class functions, tail calls, an expressive 
static type system and so on. These applications benefit far more from good 
code generation than from a fast GC, and it is now unusual to see more than 
5% of CPU time spent in the GC for most OCaml programs.

Type specialization during JIT compilation removes the need for a uniform 
run-time representation of values which, amongst other things, obviates all 
boxing of floats. Value types allow custom data structures to be stored 
unboxed when appropriate (e.g. complex numbers).
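
For comparison, OCaml already special-cases one such representation: a `float array` is stored flat, so numeric inner loops operate on unboxed doubles. A minimal sketch (not from the original mail):

```ocaml
(* Sketch: OCaml stores a float array flat (unboxed doubles), so this
   dot product runs without per-element boxing. A JIT with type
   specialisation extends the same benefit to user-defined value
   types such as complex numbers. *)
let dot (a : float array) (b : float array) =
  let acc = ref 0.0 in
  for i = 0 to Array.length a - 1 do
    acc := !acc +. a.(i) *. b.(i)
  done;
  !acc

let () =
  (* 1*4 + 2*5 + 3*6 = 32 *)
  assert (dot [| 1.0; 2.0; 3.0 |] [| 4.0; 5.0; 6.0 |] = 32.0)
```

The difference is that OCaml hard-codes this optimization for a few types, whereas JIT-time specialization applies it uniformly to any value type.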

This is why F# can be so productive for high-performance numerics even though 
it is built upon a run-time that was specifically designed for C#.

> As a case in point, I suggest an experiment (which unfortunately I don't
> have the time or motivation to realize). Replace the current Ocaml GC
> either in bytecode or in nativecode ocaml by Boehm's collector (which is
> multithread compatible). I'm sure you'll get a significant performance
> loss, but you should gain the true multi-threading feature. Of course,
> synchronization issues will appear, very probably in application code
> (and some C function wrappers).

That is an interesting idea and, in fact, perhaps LLVM+Boehm would be the 
easiest way to create a new functional language implementation that captures 
F#'s productivity.

Dr Jon D Harrop, Flying Frog Consultancy Ltd.