This is our submission to the ICFP 2000 programming contest. The team was composed of seven members, all from INRIA Rocquencourt:
We used Objective Caml version 3.00 (of course) and implemented all the features proposed in the challenge task (tiers 1, 2 and 3).
The most notable optimization is that we maintain a bounding sphere for each geometric object (basic or compound) and first test whether a ray intersects the bounding sphere before computing the exact intersection. This is quite effective on the tests, eliminating about 75% of the exact intersection tests. The program spends more than 50% of its running time checking for intersections with the bounding spheres; this check is a fairly simple, hand-optimized geometric formula for which the OCaml compiler generates optimal Pentium Pro code.
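To give an idea of the pre-test, here is a minimal sketch, written with hypothetical vec and sphere types; our actual code differs in representation and carries the hand-optimizations mentioned above:

    (* Hypothetical types for the sketch. *)
    type vec = { x : float; y : float; z : float }
    type sphere = { center : vec; radius : float }

    let sub a b = { x = a.x -. b.x; y = a.y -. b.y; z = a.z -. b.z }
    let dot a b = a.x *. b.x +. a.y *. b.y +. a.z *. b.z

    (* Does the ray [origin + t * dir] (t >= 0, [dir] normalized) meet
       sphere [s]?  We only need a yes/no answer, not the point itself. *)
    let hits_bounding_sphere origin dir s =
      let oc = sub s.center origin in
      let b = dot oc dir in                  (* distance to closest approach *)
      let c = dot oc oc -. s.radius *. s.radius in
      c <= 0.0                       (* the ray starts inside the sphere *)
      || (b >= 0.0 && b *. b >= c)   (* sphere ahead, discriminant >= 0 *)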
Also, the top-level unions composing the scene are rearranged into a binary space partitioning tree before the bounding spheres are computed, in an attempt to balance the tree of unions and minimize the radii of the bounding spheres. This doesn't always win, especially for fractal-like scenes that are already very well grouped, so a heuristic picks the ``most balanced'' of the original scene and the regrouped one.
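The key geometric ingredient of the regrouping is the smallest sphere enclosing two given spheres, which becomes the bounding sphere of a regrouped union node. A sketch, reusing the hypothetical types and helpers from the previous fragment:

    let scale k v = { x = k *. v.x; y = k *. v.y; z = k *. v.z }
    let add a b = { x = a.x +. b.x; y = a.y +. b.y; z = a.z +. b.z }

    (* Smallest sphere enclosing spheres [s1] and [s2]. *)
    let enclose s1 s2 =
      let c12 = sub s2.center s1.center in
      let d = sqrt (dot c12 c12) in
      if d +. s2.radius <= s1.radius then s1       (* s2 already inside s1 *)
      else if d +. s1.radius <= s2.radius then s2  (* s1 already inside s2 *)
      else
        let r = (d +. s1.radius +. s2.radius) /. 2.0 in
        (* New center on the segment from c1 to c2, at distance r - r1
           from c1.  Note that d > 0 here, since neither sphere contains
           the other. *)
        { center = add s1.center (scale ((r -. s1.radius) /. d) c12);
          radius = r }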
Finally, we keep track of the total light attenuation along reflected rays and cut off the recursion when the contribution becomes negligible. This allows rendering to fairly large depths in mirror-rich scenes in reasonable time.
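A minimal sketch of this cutoff, with all names hypothetical (the real tracer threads much more state around): weight is the product of the reflection coefficients accumulated so far, and the recursion stops as soon as its result could no longer change a pixel.

    type color = { red : float; green : float; blue : float }

    let black = { red = 0.0; green = 0.0; blue = 0.0 }
    let add_c a b = { red = a.red +. b.red; green = a.green +. b.green;
                      blue = a.blue +. b.blue }
    let scale_c k c = { red = k *. c.red; green = k *. c.green;
                        blue = k *. c.blue }

    let cutoff = 1.0 /. 256.0   (* below one 8-bit level: invisible *)

    (* [shoot ray] is assumed to return the local color at the first hit,
       plus the reflected ray and its reflection coefficient, if any. *)
    let rec trace shoot ray ~depth ~weight =
      if depth = 0 || weight < cutoff then black
      else
        let local, reflection = shoot ray in
        match reflection with
        | None -> local
        | Some (kr, refl_ray) ->
            add_c local
              (scale_c kr
                 (trace shoot refl_ray ~depth:(depth - 1)
                    ~weight:(weight *. kr)))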
The language part of the challenge is implemented as a straightforward interpreter, following the inference rules very closely. We hash-cons identifiers for faster environment lookup. We also detect surface functions that are constant, or constant per face, and avoid interpreting them at every point. With these optimizations, the time spent in the interpreter is negligible, so we didn't bother optimizing further.
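Hash-consing identifiers can be as simple as the following sketch (hypothetical code): each name is interned once into a unique integer tag, so that environment lookups compare cheap integers instead of strings.

    let ident_table : (string, int) Hashtbl.t = Hashtbl.create 257
    let next_tag = ref 0

    (* Map each identifier string to a unique integer, reusing the tag
       if the string has been seen before. *)
    let intern name =
      try Hashtbl.find ident_table name
      with Not_found ->
        let tag = !next_tag in
        incr next_tag;
        Hashtbl.add ident_table name tag;
        tag

    (* Environments are then association lists keyed by tag; lookup
       compares integers, never strings. *)
    let lookup (env : (int * 'a) list) tag = List.assoc tag env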
For a long time, the images produced by our program were significantly brighter than those on the ``examples'' Web page. We weren't too worried about this, and indeed, once the judges fixed the images on that page, they looked a lot more like our results. Minor differences in colors remain, but we attribute them to rounding errors.
The most painful part of the challenge was tracking changes in the 18 versions of the 23-page spec. Change bars would have been appreciated. The next most painful part was getting the cones right.
So, we got second prize. Not bad, especially since the team that won first prize also used OCaml and was headed by a former INRIA PhD student and major contributor to the OCaml system (Jérôme Vouillon). A good year for OCaml, obviously.
The first-prize entry was much faster than ours on some of the test images. We do not yet know what caused this speed difference. Their interpreter was faster than ours, in particular thanks to better handling of environments (indexing by position rather than by variable name), but this does not seem to account for all of the difference. We will see when the judges release their test files and the winning team releases its source.
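The idea behind positional lookup is the classic replacement of names by de Bruijn-style indices in a preliminary pass. Here is an illustrative sketch on a toy AST with one binding construct; GML's binders differ, and this is of course not the winners' code:

    type expr =
      | Var of string          (* as produced by the parser *)
      | VarIdx of int          (* after the indexing pass *)
      | Fun of string * expr   (* binds one variable *)
      | App of expr * expr

    (* Replace each Var by its position among the enclosing binders, so
       that run-time lookup becomes an index into a list of values. *)
    let rec index_vars stk e =
      match e with
      | Var x ->
          let rec pos i = function
            | [] -> failwith ("unbound variable " ^ x)
            | y :: rest -> if x = y then VarIdx i else pos (i + 1) rest
          in
          pos 0 stk
      | VarIdx _ -> e
      | Fun (x, body) -> Fun (x, index_vars (x :: stk) body)
      | App (e1, e2) -> App (index_vars stk e1, index_vars stk e2)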
Our entry fell into a cunning trap carefully laid by the judges. From what John Reppy told us, one of their tests contained a geometric object whose associated surface function does not terminate (it goes into bottomless recursion), but this object is not included in the scene to be rendered. A ``normal'' implementation would therefore never evaluate the looping surface function and would avoid non-termination. However, our implementation runs every surface function once when the geometric object is built, in order to determine whether it is constant. Thus, on this test, our program looped and crashed on a stack overflow. This may have affected our scores quite negatively. Strictly speaking, the judges are right (just as Mr Homais is, strictly speaking, always right in Madame Bovary): our optimization for constant surface functions does not preserve the semantics of the input GML programs. Still, they were a bit sneaky on this one...
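To make the trap concrete, here is roughly what the offending optimization amounts to. This is a hypothetical sketch (with color as in the earlier fragment), and the wiring of args_consulted into the interpreter is elided:

    (* Flag assumed to be set by the interpreter whenever a surface
       function reads one of its (face, u, v) arguments. *)
    let args_consulted = ref false

    type surface =
      | Constant of color      (* result cached at object-build time *)
      | Procedural of (int -> float -> float -> color)

    let make_surface (f : int -> float -> float -> color) =
      args_consulted := false;
      let probe = f 0 0.0 0.0 in   (* eager call: loops if [f] loops! *)
      if !args_consulted then Procedural f else Constant probe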
We promised more details if we won a prize, so it's time to deliver.
There is not much to add to the above concerning the program itself. We did not implement anything particularly clever or original, and there is no doubt that ray-tracing experts could have done much better. Fortunately for us, the challenge task was well balanced and also contained a significant ``language'' part, which kept away all those graphics experts writing in C... (In this sense, the 2000 contest was rather different from the 1999 contest: the 1999 task called for compiler and theorem-proving experts and was very much an open research problem, while the 2000 contest was more balanced in the expertise required and covered a well-investigated area.) To do reasonably well on both aspects of the contest (language and graphics), one needed a sufficiently high-level language with garbage collection and decent floating-point performance, and OCaml is an excellent match for these requirements.
As for the ``unlimited bragging rights'', we've already bragged a lot about the virtues of OCaml in the past, and there is not much to add. This year, the judges said that OCaml is both

    the superior programming tool of choice for discriminating hackers

and

    a fine programming tool for many applications

so what else could we add? (Greg Morrisett made some funny faces when saying this, but resistance is futile, he has been assimilated already.)
What we would like to talk about here is our development process. First, there is no denying that we had a big team (7 members). There are many reasons for this. There were a lot of talented INRIA graduate students hanging around at the time of the contest, and it is well known that graduate students would do anything rather than work on their PhD topics. (To be fair, it is also well known that INRIA full-time researchers quite often would do anything rather than work on their research.) Another reason for such a big team is that there was no domain expert among us. Each of us felt a bit lost with the topic and was comforted by having others check what we were doing. In a sense, we kept each other warm.
Having seven members in the team does not mean all seven wrote the code. Indeed, that would be hard to manage. Instead, we quickly split into two groups: one group (Cuoq, Doligez, Harley, Leroy) concentrated on getting a basic implementation to work correctly, while the second group (Ailleret, Le Fessant, Schmitt) worked on algorithmic optimizations and extra features. In each group, at any time only one person was actually writing code; the others assumed various supporting roles, such as discussing the algorithms and code structure, working out the maths on the blackboard, reviewing the code as it was written, and developing test cases. This is in essence the ``surgical team'' model from Brooks' The Mythical Man-Month.
During the last 12 hours, the code was nearly finished, so the teams reconfigured to allow more parallel work. Some of us (Leroy, Le Fessant, Harley) worked on profiling and micro-optimizations, staring at the assembly code produced by ocamlopt. Others (Cuoq, Doligez, Harley, Schmitt) developed nice examples in GML.
Here is an approximate timeline for our development: