
Comparison of OCaml and MLton for numerics
Date: 2007-06-01 (01:23)
From: skaller <skaller@u...>
Subject: Re: [Caml-list] Comparison of OCaml and MLton for numerics
On Thu, 2007-05-31 at 23:30 +0200, Alain Frisch wrote:
> Jon Harrop wrote:
> >> My point, however, is that MLton and OCaml are being fed the same
> >> code, and if OCaml performs specializing and proper inlining, it will  
> >> get almost twice its current performance.
> > 
> > The OCaml compilers are designed to handle good code.
> Could you elaborate? Do you mean that code that would benefit from
> inlining is not good code?

A general comment may explain this: some systems deliberately
provide performance that is readily predictable. For example,
in the design of the STL, all the functions provided are fast,
with specified O() bounds. Slower functions like
'List.nth' are not provided, because then the speed of a program
would not be evident in the syntax.
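A small OCaml illustration of that point (the function names are mine, not from the thread): the cost model is visible right in the source, so a reader can predict the asymptotics without knowing anything about the optimiser.

```ocaml
(* List.nth walks the list from the head, so indexing inside a loop
   is quadratic -- and the syntax says so. *)
let nth_sum_quadratic xs =
  let n = List.length xs in
  let total = ref 0 in
  for i = 0 to n - 1 do
    total := !total + List.nth xs i  (* O(i) per call: O(n^2) overall *)
  done;
  !total

(* One traversal: linear, and again the cost is evident from the code. *)
let nth_sum_linear xs = List.fold_left ( + ) 0 xs
```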

So what I believe Jon and Xavier mean here is that the
OCaml compilers compile code down to something whose cost is easily
predicted from the input syntax. No magic like loop-invariant
code motion: What You See Is What You Get.

The idea is that this gives the programmer *control* over
performance. It may require more work, but the lack of
'magic' which can defeat manual optimisation attempts is seen
as a virtue.

Basically the code is seen as just that: an encoding of an algorithm.
If you want it to run faster, change your encoding.
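As a concrete (hypothetical, not from the thread) instance of "change your encoding": since the classic OCaml compiler does not hoist loop-invariant computations for you, you do it by hand in the source.

```ocaml
(* Naive encoding: the invariant k *. k is written inside the loop body,
   and what you write is what runs -- it is recomputed per element. *)
let sum_scaled xs k =
  List.fold_left (fun acc x -> acc +. x *. (k *. k)) 0.0 xs

(* Recoded by hand: the invariant is computed once, outside the fold. *)
let sum_scaled' xs k =
  let k2 = k *. k in
  List.fold_left (fun acc x -> acc +. x *. k2) 0.0 xs
```

Both compute the same result; only the encoding, and hence the cost, differs.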

The opposite approach -- adding as much smarts to the optimiser
as possible -- can generate much better code in many circumstances,
but it requires much more knowledge of complex internals from the
programmer to change the generated encoding where the magic didn't
work so well -- and in turn this puts pressure on the compiler vendor
to improve the 'smartness' of their optimisation heuristics ..
simply because no one else has the expertise to do so.

Someone (as usual no URL sorry) wrote a paper roughly titled
'guaranteed optimisations' which is actually an interesting
perspective on this whole scenario.

The fact is, no programmer can possibly handle the complex
recoding an automatic algorithm can, so there is always going
to be a tension between 'do it yourself' and 'automagical'
optimisation strategies.

OCaml seems to pick a good mix. Cf. the dypgen GLR parser:
the old version took over 95% of all compile time; the new version,
with recoded data structures etc., is down to about 20%
of compile time .. it's over an order of magnitude faster.
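The dypgen speedup above came from recoding data structures. As a generic illustration of that kind of change (not the actual dypgen modification, which isn't described in the thread), replacing repeated linear membership tests with a hash table turns a quadratic pass into a roughly linear one:

```ocaml
(* List-based encoding: List.mem is O(n), so the whole pass is O(n^2).
   Note: accumulates in reverse order. *)
let dedup_list syms =
  List.fold_left
    (fun acc s -> if List.mem s acc then acc else s :: acc)
    [] syms

(* Recoded with a hash table: membership is O(1) amortised,
   so the whole pass is roughly O(n), and order is preserved. *)
let dedup_hash syms =
  let seen = Hashtbl.create 16 in
  List.filter
    (fun s ->
       if Hashtbl.mem seen s then false
       else (Hashtbl.add seen s (); true))
    syms
```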

IMHO: whilst quite a lot is known about how to optimise
executable code .. almost nothing is understood about how
to optimise data structures (automatically I mean).

John Skaller <skaller at users dot sf dot net>
Felix, successor to C++: http://felix.sf.net