
Comparison of OCaml and MLton for numerics
Date: 2007-06-01 (14:15)
From: Stephen Weeks <sweeks@s...>
Subject: Re: [Caml-list] Comparison of OCaml and MLton for numerics
> MLton's strength is that you don't have to pay the price for abstraction,
> i.e. cleaning up your program (by refactoring it or making it more modular)
> does not degrade performance.  I have no experience with MLton, but I don't
> believe that performance is much more difficult to predict than with OCaml
> (Stephen?).

Performance is much more difficult to predict with MLton than with OCaml.  Even
worse, with whole-program optimization a small change in one part of the program
can affect performance in another part.  That having been said, I would gladly
take that drawback in exchange for the benefits of whole-program optimization.
Eliminating the price of abstraction causes a change in mindset that helps one
to avoid the mistake of premature optimization and to worry more about
correctness and getting the code done.  Also, although the performance of MLton
is less predictable, it's not like the generated code is sometimes twice as fast
as it would be with separate compilation and sometimes twice as slow.  In
reality, it's always as fast, and often, when the optimizations kick in, it's
significantly faster (2, 3, 5 times).  BTW, I'm not specifically comparing OCaml
and MLton here -- I'm making the general observation that whole-program
optimization adds optimization opportunities.
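As a hypothetical illustration of the abstraction being discussed (not code from the thread), consider a generic fold used to define a sum.  Under separate compilation the call to the combining function is typically an indirect call through a closure; a whole-program compiler like MLton can specialize the caller so the indirect call becomes a direct, inlinable addition:

```ocaml
(* [fold] abstracts over the combining function [f].  With separate
   compilation, each application of [f] inside the fold is an indirect
   call; whole-program optimization can specialize [sum] to this
   particular [f], turning it into a direct integer addition. *)
let fold f init xs = List.fold_left f init xs

(* The clean, abstract version -- the one you'd rather write. *)
let sum xs = fold ( + ) 0 xs

let () = assert (sum [1; 2; 3; 4] = 10)
```

The point is that the programmer keeps the abstract version; the compiler, not the programmer, produces the specialized one.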

Defunctorization is one example of the many whole-program optimizations in MLton
that allow more abstract code without penalties.  To answer a couple other
questions on the list -- defunctorization can indeed give significant
performance improvements, and after defunctorization, there are no performance
penalties left from modules (indeed, there are no modules left :-).  Also, the
performance penalty at Jane Street (and other places) from functors is real, and
causes people to rewrite code, manually duplicate code, etc.  And, it prevents
people from using functors in the first place, because they have a (conscious or
unconscious) understanding of the cost.
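To make the defunctorization point concrete, here is a hypothetical sketch (not from the thread): a small functor, its application, and the module-free code that defunctorization morally reduces it to.

```ocaml
module type ORD = sig
  type t
  val compare : t -> t -> int
end

(* Abstract version: generic over any ordered type. *)
module MakeMax (O : ORD) = struct
  let max a b = if O.compare a b >= 0 then a else b
end

module IntMax = MakeMax (struct
  type t = int
  let compare = Stdlib.compare
end)

(* After defunctorization, [IntMax.max] is morally just this direct
   function -- no functor, no signature, no module indirection left. *)
let int_max a b = if Stdlib.compare a b >= 0 then a else b

let () =
  assert (IntMax.max 3 7 = 7);
  assert (int_max 3 7 = 7)
```

The functor application is specialized away at compile time, which is why, after defunctorization, there is no module-level cost left to pay.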

Unpredictable performance is a fact of life with computers.  It's not possible
to eliminate it, even with a sufficiently simple compilation approach.  There
are way too many other factors.  I think it's better to encourage people to
program in the cleanest way possible, and then to profile and improve their code
if necessary.  The same arguments that one can make for simple compilation
strategies leading to predictable performance could be used to argue that we
should all program in C or in assembly.