Odd performance result with HLVM
Date: 2009-03-04 (16:17)
From: Mikkel_Fahnøe_Jørgensen <mikkel@d...>
Subject: Re: [Caml-list] Odd performance result with HLVM
Looking at the benchmarks game and the other benchmarks I have seen, I
noticed that Haskell is almost as fast as OCaml and sometimes faster.
Some Lisp implementations are also pretty fast.

However, when you look at memory consumption, OCaml uses considerably
less memory than most languages outside the C family.

I suspect that many real-world performance scenarios, such as heavily
loaded web servers and complex simulations, depend very much on memory
consumption. This is both because of GC overhead and because the
memory pipeline slows down as more cache levels become involved.

So in the case of a new JIT solution for OCaml, I believe it is
important to take this aspect into account as well.
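The boxing point behind this memory argument can be made concrete. Below is a minimal sketch (not from the original thread) that uses the standard `Gc.allocated_bytes` counter to compare allocating a flat, unboxed `float array` against a polymorphic container that boxes each element; the names `allocated`, `unboxed_bytes`, and `boxed_bytes` are illustrative, not from the discussion:

```ocaml
(* Measure bytes allocated on the OCaml heap while running [f]. *)
let allocated f =
  let before = Gc.allocated_bytes () in
  f ();
  Gc.allocated_bytes () -. before

let unboxed_bytes =
  (* A [float array] is unboxed in OCaml: one flat block of doubles. *)
  allocated (fun () ->
    ignore (Sys.opaque_identity (Array.make 10_000 0.0)))

let boxed_bytes =
  (* A polymorphic container hides each float behind a pointer,
     so every element costs extra words and extra GC work. *)
  allocated (fun () ->
    ignore (Sys.opaque_identity
              (Array.init 10_000 (fun i -> Some (float_of_int i)))))

let () =
  Printf.printf "unboxed: %.0f bytes, boxed: %.0f bytes\n"
    unboxed_bytes boxed_bytes
```

The boxed version allocates several times more, which is the overhead monomorphisation (discussed below in the thread) would remove.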


2009/3/2 Kuba Ober <ober.14@osu.edu>:
>> Jon Harrop wrote:
>>> There are really two major advantages over the current ocamlopt design
>>> and both stem from the use of JIT compilation:
>>> . Run-time types allow per-type functions like generic pretty printers
>>> and comparison.
>>> . Monomorphisation during JIT compilation completely removes the
>>> performance cost of polymorphism, e.g. floats, tuples and records are never
>>> boxed.
>> Do you mean that each polymorphic function is compiled into a different
>> native piece of code each time it is called with different parameter
>> types? How does the JIT'ed code size compare to ocamlopt'ed code size?
> Having done it, although not in a JIT but in your plain-old whole-project
> compiler, for my use cases the code size actually shrinks. The functions
> usually end up inlined and sometimes reduce to a few machine instructions.
> Most of the runtime library is written using polymorphic functions. Case in
> point: all sorts of string-processing functions which can take as arguments
> either strings stored in RAM or stored in ROM, and those data types are
> very much orthogonal on my platform. An invocation of a tail-recursive
> "strlen" reduces to about as many bytes of code as it'd take to push the
> arguments on the stack and call a non-polymorphic version of itself.
> That's how I initially got a statically typed LISP to compile for "tiny"
> 8-bit microcontrollers without using all of the whopping 1 kB of RAM and
> 16 kB of program flash on a Z8F162x device.
> Right now I'm hacking away to get rid of the last traces of LISPiness and
> to get the project fully working in OCaml, using ML-like syntax for user
> code. I like it much better than LISP's.
> I have also found that by doing whole-project compilation with aggressive
> constant propagation and compile-time execution of functions that depend
> only on known constants, I could get rid of about 85% of the LISP macros in
> my code. The other macros ended up being rewritten to just invoke
> ct_eval : string -> function, which is a compile-time eval function. It's
> just like LISP macros, but since in the ML family code isn't data, it was
> easier to just generate strings and feed them into the compiler rather than
> expose all of the AST machinery to "userland".
> Cheers, Kuba
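The polymorphic "strlen" Kuba describes can be sketched in OCaml. This is a hypothetical reconstruction, not his actual code: the function is written once against any indexable character source, and a monomorphising compiler can then inline `get` and `size` to emit a specialized loop per source type (here `Bytes.t` stands in for RAM-resident strings and an immutable `string` for ROM-resident ones):

```ocaml
(* A NUL-terminated length over any indexable char source. A compiler
   doing monomorphisation can specialize this per [get]/[size] pair. *)
let strlen (type a) ~(get : a -> int -> char) ~(size : a -> int) (s : a) =
  let rec go i =
    if i >= size s || get s i = '\000' then i else go (i + 1)
  in
  go 0  (* tail-recursive, so it compiles to a plain loop *)

let () =
  (* "RAM"-backed source: a mutable Bytes buffer. *)
  let buf = Bytes.of_string "hello\000junk" in
  assert (strlen ~get:Bytes.get ~size:Bytes.length buf = 5);
  (* "ROM"-backed source: an immutable string standing in for flash. *)
  assert (strlen ~get:String.get ~size:String.length "abc" = 3)
```

With both call sites specialized, nothing polymorphic survives to run time, which matches the observation that the generic version can end up no larger than a monomorphic one plus its call sequence.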