Odd performance result with HLVM
From: Kuba Ober <ober.14@o...>
Subject: Re: [Caml-list] Odd performance result with HLVM

> Jon Harrop wrote:
>> There are really two major advantages over the current ocamlopt
>> design and both stem from the use of JIT compilation:
>>
>> . Run-time types allow per-type functions like generic pretty
>> printers and comparison.
>>
>> . Monomorphisation during JIT compilation completely removes the
>> performance cost of polymorphism, e.g. floats, tuples and records
>> are never boxed.
>
> Do you mean that each polymorphic function is compiled into a
> different native piece of code each time it is called with different
> parameter types? How does the JIT'ed code size compare to ocamlopt'ed
> code size?
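
To make the boxing point concrete before answering: a minimal plain-OCaml
sketch of the difference (not HLVM code, the names are mine):

  (* Polymorphic comparison: one piece of compiled code must serve
     every 'a, so under ocamlopt float arguments arrive boxed. *)
  let poly_min x y = if x < y then x else y

  (* Monomorphic at float: the comparison works on unboxed floats.
     Monomorphisation derives this kind of specialisation from poly_min,
     once per concrete type that is actually used. *)
  let float_min (x : float) (y : float) = if x < y then x else y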

Having done this (although in a plain old whole-project compiler, not a
JIT), I find that for my use cases the code size actually shrinks. The
functions usually end up inlined and sometimes reduce to a few machine
instructions. Most of the runtime library is written using polymorphic
functions. Case in point: all sorts of string-processing functions which
can take as arguments either strings stored in RAM or strings stored in
ROM, and those data types are very much orthogonal on my platform. An
invocation of a tail-recursive "strlen" reduces to about as many bytes
of code as it'd take to push the arguments on the stack and call a
non-polymorphic version of itself.
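
To give a concrete (if simplified) picture of the kind of function I
mean, here is a plain-OCaml sketch; the type names and the
NUL-terminated convention are made up for the example, not my actual
runtime:

  (* Two orthogonal string representations, roughly like the RAM/ROM
     split on my platform. *)
  type ram_string = Ram of bytes   (* mutable data in RAM *)
  type rom_string = Rom of string  (* stand-in for program-flash data *)

  (* One polymorphic, tail-recursive "strlen" written against an
     accessor. *)
  let strlen ~char_at s =
    let rec go i = if char_at s i = '\000' then i else go (i + 1) in
    go 0

  (* The specialisations a monomorphising whole-program compiler
     produces and then inlines at each call site, written out by hand
     here to show the idea. *)
  let ram_strlen s = strlen ~char_at:(fun (Ram b) i -> Bytes.get b i) s
  let rom_strlen s = strlen ~char_at:(fun (Rom r) i -> r.[i]) s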

That's how I initially got a statically typed LISP to compile for "tiny"
8-bit microcontrollers without using all of the whopping 1 KB of RAM and
16 KB of program flash on a Z8F162x device.

Right now I'm hacking away to get rid of the last traces of LISPiness
and to get the project fully working in OCaml, using ML-like syntax for
user code. I like it much better than LISP's.

I have also found that by doing whole-project compilation with
aggressive constant propagation and compile-time execution of functions
that depend only on known constants, I could get rid of about 85% of the
LISP macros in my code. The remaining macros ended up being rewritten to
just invoke ct_eval : string -> function, which is a compile-time eval
function. It's just like LISP macros, but since in the ML family code
isn't data, it was easier to just generate strings and feed them into
the compiler, rather than expose all of the AST machinery to "userland".

Cheers, Kuba