Date: 2009-12-22 (04:40)
From: Linas Vepstas <linasvepstas@g...>
Subject: Re: [Caml-list] Looking for information regarding use of OCaml in scientific computing and simulation
Hi Lukasz,

Yikes!  Care to start an argument on my behalf?

2009/12/20 Lukasz Stafiniak <>:
> ---------- Forwarded message ----------
> From: Dario Teixeira <>
> Date: Sun, Dec 20, 2009 at 3:27 PM
> Subject: Re: [Caml-list] Re: OCaml is  broken
> To: Erik Rigtorp <>
> Cc: caml-list <>
> Hi,
>> It's too bad that INRIA is not interested in fixing this bug. No
>> matter what people say I consider this a bug. Two cores is standard by
>> now, I'm used to 8, next year 32 and so on. OCaml will only become
>> more and more irrelevant. I hate to see that happening.

Hear, hear!

> This is a perennial topic in this list.  Without meaning to dwell too
> long on old arguments, I simply ask you to consider the following:
> - Do you really think a concurrent GC with shared memory will scale neatly
>  to those 32 cores?

Time to start funding GC research?  Is concurrent GC really
that bad?

> - Will memory access remain homogeneous for all cores as soon as we get into
>  the dozens of cores?

Yes, NUMA (non-uniform memory access) is well known to be
painful to optimize for, because locality of reference is hard
to predict.  But is tuning for NUMA harder than creating and
maintaining message-passing code?  Not by a long shot.

Anyway, CPU designers understand that NUMA is unpopular
with software types, and they are trying hard to make memory
access as homogeneous as possible.  Look at 'Blue Waters' for
an extreme example.

> - Have you considered that many Ocaml users prefer a GC that offers maximum
>  single core performance, because their application is parallelised via
>  multiple processes communicating via message passing?

Have you ever tried writing a significant or complex algorithm
using message passing?  It's fun if you have nothing better to
do -- it's a good intellectual challenge.  You can even learn
some interesting computer science while you do it.

However, if you are interested in merely using the system
to do your "real" work, then writing message-passing code
is an utter waste of time -- it's difficult, time-consuming,
error-prone, hard to balance, optimize and tune, works well
only for "embarrassingly parallel" code, etc.  Even the evil
slowdown of NUMA is often better than trying to
performance-tune a message-passing system.
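
For concreteness, the multi-process style being debated here can be
sketched in a few lines of OCaml.  This is a minimal illustration, not
anyone's production setup: it assumes the Unix library is linked, and
the workload (summing a list shipped to a forked worker over pipes with
Marshal) is invented for the example.

```ocaml
(* Parent forks a worker process, sends it work over one pipe,
   and reads the result back over another. *)
let () =
  let to_child_r, to_child_w = Unix.pipe () in
  let to_parent_r, to_parent_w = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
      (* Child: receive a list of ints, sum it, send the sum back. *)
      Unix.close to_child_w; Unix.close to_parent_r;
      let ic = Unix.in_channel_of_descr to_child_r in
      let oc = Unix.out_channel_of_descr to_parent_w in
      let (xs : int list) = Marshal.from_channel ic in
      Marshal.to_channel oc (List.fold_left (+) 0 xs) [];
      flush oc;
      exit 0
  | _pid ->
      (* Parent: send the work, then block until the child replies. *)
      Unix.close to_child_r; Unix.close to_parent_w;
      let oc = Unix.out_channel_of_descr to_child_w in
      let ic = Unix.in_channel_of_descr to_parent_r in
      Marshal.to_channel oc [1; 2; 3; 4] [];
      flush oc;
      let (sum : int) = Marshal.from_channel ic in
      Printf.printf "sum = %d\n" sum
```

Even in this toy form you can see the overhead the thread is arguing
about: every value crosses an address-space boundary via serialization,
and plumbing, error handling and load balancing are all on you.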

Let me put it this way: suggesting that programmers can
write their own message-passing system is kind of like
telling them that they can write their own garbage
collector, or design their own closures, or go create
their own type system. Of course they can ... and
if they wanted to do that, they would be programming in
C or assembly, and would probably be designing new
languages.  Because by the time you are done with message
passing, you've created a significant and rich programming
system that resembles a poorly designed language... been
there, done that.

> In this context,
>  your "bug" is actually a "feature".

Why not give people the choice?


disclaimer: I don't (currently) use caml -- this is an outsider's