Re: Why OCaml sucks
Date: 2008-05-09 (20:39)
From: Gerd Stolpmann <info@g...>
Subject: Re: [Caml-list] Re: Why OCaml rocks

On Friday, 2008-05-09 at 19:10 +0100, Jon Harrop wrote:
> On Friday 09 May 2008 12:12:00 Gerd Stolpmann wrote:
> > I think the parallelism capabilities are already excellent. We have been
> > able to implement the application backend of Wink's people search in
> > O'Caml, and it is of course a highly parallel system of programs. This
> > is not the same class raytracers or desktop parallelism fall into - this
> > is highly professional supercomputing. I'm talking about a cluster of
> > ~20 computers with something like 60 CPUs.
> >
> > Of course, we did not use multithreading very much. We are relying on
> > multi-processing (both "fork"ed style and separately started programs),
> > and multiplexing (i.e. application-driven micro-threading). I especially
> > like the latter: Doing multiplexing in O'Caml is fun, and a substitute
> > for most applications of multithreading. For example, you want to query
> > multiple remote servers in parallel: Very easy with multiplexing,
> > whereas the multithreaded counterpart would quickly run into scalability
> > problems (threads are heavy-weight, and need a lot of resources).
> If OCaml is good for concurrency on distributed systems that is great but it 
> is completely different to CPU-bound parallelism on multicores.
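The "application-driven micro-threading" mentioned above can be illustrated with a toy cooperative scheduler: tasks yield back to a run queue instead of blocking, so several "queries" make progress interleaved in a single OS thread. This is a hypothetical sketch (the names `runq`, `spawn`, and `task` are invented for illustration), not code from the Wink backend.

```ocaml
(* Toy cooperative scheduler: a run queue of suspended computations. *)
let runq : (unit -> unit) Queue.t = Queue.create ()
let trace : string list ref = ref []

let spawn f = Queue.add f runq

(* "Querying multiple remote servers" is simulated by tasks that
   yield between steps instead of blocking the whole program. *)
let rec task name steps () =
  if steps > 0 then begin
    trace := name :: !trace;
    Printf.printf "%s: %d steps left\n" name steps;
    (* Yield: re-enqueue the remaining work and return control. *)
    Queue.add (task name (steps - 1)) runq
  end

let run () =
  while not (Queue.is_empty runq) do
    (Queue.pop runq) ()
  done

let () =
  spawn (task "server-A" 2);
  spawn (task "server-B" 2);
  run ()
  (* The two "queries" interleave: A, B, A, B *)
```

A real system would of course suspend on I/O readiness (e.g. via select/epoll) rather than on a step counter, but the control structure is the same.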

You sound like somebody who tries to sell hardware :-)

Well, our algorithms are quite easy to parallelize. I don't see a
difference in whether they are CPU-bound or disk-bound - we also have
lots of CPU-bound stuff, and the parallelization strategies are the
same.

The important thing is whether the algorithm can be formulated in such a
way that state mutations are rare, or can at least be done in a
"cache-friendly" way. Such algorithms exist for a lot of problems. I
don't know which problems you want to solve, but it sounds as if they
were special problems. As in most industries, most of our problems
are simply "do the same for N objects" where N is very large, and
sometimes "sort data", also for large N.
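The "do the same for N objects" pattern parallelizes by partitioning the index range among workers; each slice can then be handed to a forked worker process. A minimal sketch (the helper `slice` is hypothetical, not from the actual codebase):

```ocaml
(* Partition N objects into per-worker half-open slices [lo, hi).
   Hypothetical illustration of the "do the same for N objects"
   decomposition; each slice would go to one forked worker. *)
let slice ~workers ~n w =
  let per = (n + workers - 1) / workers in  (* ceiling division *)
  let lo = min n (w * per) in
  let hi = min n (lo + per) in
  (lo, hi)

let () =
  (* N = 10 objects split among 4 workers *)
  for w = 0 to 3 do
    let lo, hi = slice ~workers:4 ~n:10 w in
    Printf.printf "worker %d handles [%d, %d)\n" w lo hi
  done
```

The slices cover the whole range without overlap, and trailing workers simply get empty slices when N < workers, so the decomposition is safe for any N.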

> > In our case, the mutable data structures that count are on disk.
> > Everything else is only temporary state.
> Exactly. That is a completely different kettle of fish to writing high 
> performance numerical codes for scientific computing.

I don't understand. Relying on disk for sharing state is a big problem
for us, but unavoidable. Disk is slow memory with very peculiar timing.
Experience shows that even accessing state over the network is cheaper
than going to disk. Often we end up designing our algorithms around the
disk's access characteristics. Compared to that, access to RAM-backed
state over the network is fast and easy.

> > I admit that it is a challenge to structure programs in a way such that
> > parallel programs not sharing memory profit from mutable state. Note
> > that it is also a challenge to debug locks in a multithreaded program so
> > that they run 24/7. Parallelism is not easy after all.
> Parallelism is easy in F#.

Wonders must have happened that I'm not aware of. How does F# prevent

> > This is a quite theoretical statement. We will rather see that most
> > application programmers will not learn parallelism at all, and that
> > consumers will start questioning the sense of multicores, and the chip
> > industry will search for alternatives.
> On the contrary, that is not a theoretical statement at all: it already 
> happened. F# already makes it much easier to write high performance parallel 
> algorithms and its concurrent GC is the crux of that capability.

Don't misunderstand me, I'm not anti-F#. It's just that I have no
interest right now in taking advantage of multicores via concurrent
GCs. I'd rather have ultra-fast single-core execution. I can do the
parallelization myself.


Gerd Stolpmann * Viktoriastr. 45 * 64293 Darmstadt * Germany 
gerd@gerd-stolpmann.de          http://www.gerd-stolpmann.de
Phone: +49-6151-153855                  Fax: +49-6151-997714