From: Jon Harrop <jon@f...>
Subject: Re: [Caml-list] Re: Why OCaml sucks
On Friday 09 May 2008 05:45:53 you wrote:
> On Thu, May 8, 2008 at 5:39 PM, Jon Harrop <jon@ffconsultancy.com> wrote:
> > 1. Lack of Parallelism: Yes, this is already a complete show stopper.
> > Exploiting multicores requires a scalable concurrent GC and message
> > passing (like JoCaml) is not a substitute. Unfortunately, this is now
> > true of all functional languages available for Linux, which is why we
> > have now migrated entirely to Windows and F#. I find it particularly
> > ironic that the Haskell community keep hyping the multicore capabilities
> > of pure code when the rudimentary GC in Haskell's only usable
> > implementation already stopped scaling.
>
> Fork?  For something like a raytracer, I do not see how threads would be
> any more useful than fork.

There are two problems with that:

. You go back to manual memory management between parallel threads/processes.

. Parallelism is for performance and performance requires mutable data 
structures.

Consequently, you almost always end up copying data unnecessarily because you 
cannot collect it otherwise, which increases memory consumption and massively 
degrades performance, in turn undermining the original point of parallelism. 
The cost of interthread communication is then so high in OCaml that you will 
rarely be able to obtain any performance improvement for the number of cores 
desktop machines will see over the next ten years, by which time OCaml will be 
10-100x slower than the competition.
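A minimal sketch of the copying cost described above: passing a value between 
forked OCaml processes means serialising it with Marshal, so every byte is 
copied into a flat buffer (and copied again on the receiving side) rather 
than shared by pointer. The array size here is illustrative.

```ocaml
(* Serialise a large structure as fork/pipe communication would require.
   Marshal.to_string copies the entire array into a fresh string buffer. *)
let () =
  let data = Array.init 1_000_000 float_of_int in
  let bytes = Marshal.to_string data [] in
  Printf.printf "marshalled size: %d bytes\n" (String.length bytes)
```

The receiver would then pay the symmetric cost in `Marshal.from_string`, 
allocating a second complete copy of the structure.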

> When was the last time you heard of a cool new windows app anyway?

The last time we released a product. :-)

> > . No 16Mb limit.
>
> What do you mean by 16mb limit?

OCaml's strings and arrays are limited to roughly 16MB on 32-bit systems.
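You can query the limits directly: `Sys.max_string_length` is 16777211 bytes 
(just under 16MB) on a 32-bit OCaml, and vastly larger on 64-bit.

```ocaml
(* Print the platform-dependent string and array size limits. *)
let () =
  Printf.printf "max string length: %d\n" Sys.max_string_length;
  Printf.printf "max array length:  %d\n" Sys.max_array_length
```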

> > . Inlining.
>
> isn't it best for the compiler to handle that?  I wouldn't mind hearing
> another perspective on this, but I thought that compilers were smarter
> these days.

Definitely not. Compilers uniformly suck at inlining. For example, aggressive 
inlining is often beneficial in numerical code and often damaging in symbolic 
code, and compilers cannot tell the difference.
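A sketch of the kind of numerical code where inlining matters: a tiny helper 
called from a hot loop. The `[@inline]` attribute shown here is a later 
addition (honoured by flambda-enabled compilers), not something available at 
the time of this thread; the function names are illustrative.

```ocaml
(* A small numeric helper: call overhead and float boxing at each
   invocation can dominate unless the compiler inlines it. *)
let[@inline] sq x = x *. x

let sum_squares (a : float array) =
  let s = ref 0.0 in
  Array.iter (fun x -> s := !s +. sq x) a;
  !s

let () =
  Printf.printf "%f\n" (sum_squares [| 1.0; 2.0; 3.0 |])
  (* prints 14.000000 *)
```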

This is very similar to "unboxed data structures are always better", which 
also isn't generally true.

I've got more gripes to add:

. Missing types, like float32 and int16.
. DLLs.
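On the missing types: OCaml's native `float` is always 64-bit. The standard 
Bigarray library does provide unboxed float32 and int16 storage, but only 
behind the bigarray interface, not as first-class types usable in ordinary 
arrays and records.

```ocaml
(* Unboxed 32-bit float storage via Bigarray; values are widened to
   64-bit float on every read and truncated on every write. *)
let () =
  let a = Bigarray.Array1.create Bigarray.float32 Bigarray.c_layout 4 in
  Bigarray.Array1.fill a 1.5;
  Printf.printf "a.{0} = %f\n" a.{0}
  (* prints a.{0} = 1.500000 *)
```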

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?e