Date: 2007-02-10 (16:18)
From: Jon Harrop <jon@f...>
Subject: Re: [Caml-list] Multiplication of matrix in C and OCaml
On Saturday 10 February 2007 15:51, 
> Damn. I should have taken '+'.

Even with modular (wrap-around) integer arithmetic that still breaks:

  a+b > a+c

when a+c wraps past max_int and becomes negative but a+b does not; then 
a+b > a+c even though c > b.
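Concretely (an illustrative sketch, not from the original message — OCaml's native ints wrap on overflow):

```ocaml
(* Sketch: native int addition wraps, so adding a larger positive
   value can yield a *smaller* sum. *)
let () =
  let a = max_int and b = 0 and c = 1 in
  assert (c > b);
  (* a + c wraps around to min_int, so a + b > a + c despite c > b *)
  assert (a + b > a + c)
```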

> Is the compiler allowed to make optimizations according to known
> mathematical laws? 

Provided they are the correct laws, yes.

> I've been wondering about similar optimizations for floats, but didn't
> get my examples right. The transformation done by the compiler would of
> course be forbidden to increase the error.

Yes, which is actually very restrictive. That is why -ffast-math can make 
C/C++ programs run so much more quickly. In OCaml, -ffast-math has a 
different purpose (to enable the emission of trig instructions, IIRC) and it 
exists on x86 and not AMD64 whereas the same flag exists on both platforms 
for GCC.
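As a small illustration (mine, not from the original post) of why such float rewrites are restrictive: reassociating additions can change the result, so a compiler may not do it unless told the error is acceptable:

```ocaml
(* Float addition is not associative. Near 1e16 the spacing between
   adjacent doubles is 2.0, so adding 1.0 twice is absorbed each time,
   while adding 2.0 once is exact. *)
let () =
  let x = 1e16 and y = 1.0 in
  assert (x +. y = x);                      (* 1.0 is absorbed *)
  assert ((x +. y) +. y <> x +. (y +. y))   (* reassociation changes the result *)
```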

Incidentally, why is -ffast-math not always enabled on x86, and why isn't it a 
no-op on AMD64 (rather than causing an "unknown option" error as it does)?

> Still: With a certain Gcc version and flags combination the OP saw a
> threefold improvement in performance. That in itself is suspicious (I
> don't think that this much optimization potential was left in Gcc ...)
> and I still would check for optimization errors in this case. Gcc is
> not bug free either, so one should test the correctness of the
> compiled program first and whether it really does the work it is
> supposed to do.

Although Frederic is an expert OCaml programmer, it sounds like his C was a 
little off the mark (having undefined behaviour and allowing arbitrary 
optimisations as a consequence).

Benchmarking is a real can of worms. I've spent a long time trying to 
benchmark programs and languages and I really think the only feasible way 
forward is to measure the real time taken to solve real problems.
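A minimal sketch of what "measuring real time" might look like in OCaml (illustrative only; the `time` helper is my own, not from the thread). `Unix.gettimeofday` comes from the standard unix library and gives wall-clock time, which is what matters when a concurrent GC can use a second core; `Sys.time` would report CPU time instead:

```ocaml
(* Time a thunk by wall-clock time and print the elapsed seconds.
   Build with the unix library, e.g.:
     ocamlfind ocamlopt -package unix -linkpkg bench.ml *)
let time name f =
  let t0 = Unix.gettimeofday () in
  let result = f () in
  Printf.printf "%s: %.3f s\n" name (Unix.gettimeofday () -. t0);
  result
```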

Even this quickly becomes apples and oranges. For example, F# has a concurrent 
GC and I have a dual core machine, so F# has the advantage that even single 
threaded programs can exploit both cores when they are allocation intensive.

So how am I supposed to compare the performance of the two languages? There is 
no panacea. In the end I must measure the time taken to perform an important 
real world task, like spinning a 3D bunny around. Is spinning a bunny serious 
enough though? If not, maybe I should use a Mandelbrot renderer:

Dr Jon D Harrop, Flying Frog Consultancy Ltd.
OCaml for Scientists