Date: 2004-09-06 (12:59)
From: Jon Harrop <jon@j...>
Subject: Re: [Caml-list] laziness
On Monday 06 September 2004 01:57, Richard Jones wrote:
> One thing that worries me about laziness.
> Doesn't laziness often indicate a bug in the code?

No. Indeed, some languages (like Haskell) are lazy by default. Some people 
advocate lazy-by-default evaluation (Haskell's approach), while others prefer 
to introduce laziness explicitly, only where they feel it is necessary. The 
principal problem with pervasive laziness is the difficulty of predicting 
memory usage, which is highly implementation dependent.
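In OCaml, explicit laziness looks like this: a minimal sketch using the
standard `lazy` keyword and `Lazy.force`. The value `expensive` here is a
made-up example, not from the original post; note that a forced suspension
is memoized, so the computation runs at most once.

```ocaml
(* A suspended computation: nothing runs until the thunk is forced. *)
let expensive = lazy (print_endline "computing..."; 6 * 7)

let () =
  let a = Lazy.force expensive in  (* runs the body, prints once *)
  let b = Lazy.force expensive in  (* memoized: returns the cached 42 *)
  assert (a = 42 && b = 42)
```

If `expensive` is never forced, its body never executes at all, which is
exactly the "might need executing" situation described below.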

> ie.  You've 
> written an expression in the program, but that expression is never
> used.  This is dead code, right?  Hence a bug?

Laziness is used when there is code which *might* need executing, not when an 
expression is never going to be executed.

Examples include recurrence relations, where code is executed on demand until 
the base case is reached; lazy lists work exactly this way. Another example is 
propagators, where further computations may be performed at the discretion of 
a predicate.

[lazy list]
> ...
> But in a sense this contains dead code too.  You've written down your
> desire to construct all the squares, and then later you change your
> mind and take only the first 10.

You haven't written down a "desire to construct all squares" any more than 
writing the equivalent recurrence relation would:

x_0 = 0
x_(n+1) = x_n + 2n + 1

You have simply defined how any square may be computed.
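That recurrence translates directly into a lazy list. Here is a sketch using
a hand-rolled memoizing stream type (the names `stream`, `squares` and `take`
are my own, not from the post); only the prefix you force is ever computed.

```ocaml
(* A lazy list: each tail is a suspension, computed (and cached) on demand. *)
type 'a stream = Cons of 'a * 'a stream Lazy.t

(* Squares via the recurrence x_0 = 0, x_(n+1) = x_n + 2n + 1. *)
let squares =
  let rec from x n = Cons (x, lazy (from (x + 2 * n + 1) (n + 1))) in
  from 0 0

(* Force and collect the first n elements. *)
let rec take n (Cons (x, tl)) =
  if n = 0 then [] else x :: take (n - 1) (Lazy.force tl)
```

`take 5 squares` forces exactly five cells and yields `[0; 1; 4; 9; 16]`; the
rest of the "infinite" list is never constructed.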

> In OCaml you'd write this correctly as something like:
> (fun x -> x * x) (range 1 10)

This assumes that you know the 10 in advance, and it also loses out on 
memoization. What if you want to compute the first 10, then 12, then 14 
squares?
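To illustrate the memoization point: with a lazy list whose tails are `Lazy.t`
suspensions, asking for a longer prefix reuses every cell already forced. The
`stream`/`take` names below are illustrative, not from the post.

```ocaml
(* Memoizing lazy list: forcing a tail caches the resulting cell. *)
type 'a stream = Cons of 'a * 'a stream Lazy.t

let squares =
  let rec from n = Cons (n * n, lazy (from (n + 1))) in
  from 0

let rec take n (Cons (x, tl)) =
  if n = 0 then [] else x :: take (n - 1) (Lazy.force tl)

let first10 = take 10 squares
let first12 = take 12 squares  (* only cells 10 and 11 are newly computed *)
```

An eager `map` over `range 1 10` would instead have to be rerun from scratch
(or re-sized) for each of 10, 12 and 14.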

> (Sorry if this appears like a troll, but actually I'm genuinely
> interested in whether laziness is useful, or indicates a bug).

Laziness can definitely be useful. Whilst writing a compiler, I recently 
noticed that the set of types I was computing and storing for each expression 
and subexpression was not always used. By making the construction lazy, I 
avoided many unnecessary computations (but not all of them, hence it isn't 
dead code) and the program now runs much faster.
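The compiler example might look something like the following sketch. All the
names here (`expr`, `annotated`, `annotate`) are hypothetical, invented to
illustrate the pattern: each node carries its type set as a `Lazy.t`, so the
set is only built for nodes whose annotation is actually consulted.

```ocaml
module StringSet = Set.Make (String)

(* A toy expression tree; each annotated node suspends its type-set. *)
type expr =
  | Var of string * string            (* variable name, its type name *)
  | App of annotated * annotated
and annotated = { e : expr; types : StringSet.t Lazy.t }

let annotate e =
  { e;
    types = lazy (match e with
      | Var (_, ty) -> StringSet.singleton ty
      | App (f, x) ->
          (* Children's sets are themselves lazy, forced only here. *)
          StringSet.union (Lazy.force f.types) (Lazy.force x.types)) }

let ex =
  annotate (App (annotate (Var ("f", "int -> int")),
                 annotate (Var ("x", "int"))))
```

Nodes whose `types` field is never forced cost nothing beyond the thunk
allocation, while forced results are cached for any later queries.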

From a theoretical standpoint, Okasaki pointed out that imperative programs 
are often faster because they amortise the cost of a sequence of operations, 
grouping work into batches. In contrast, functional programs often cannot 
amortise costs, which gives them better worst-case behaviour but worse 
average-case behaviour; the Map and Hashtbl data structures are fine examples 
of this contrast. Okasaki suggested that the solution is to use laziness in 
functional data structures to amortise costs: lazily store the necessary 
changes and force the evaluation of all outstanding suspensions only when 
necessary.
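A much-simplified sketch of that idea (not Okasaki's full banker's queue) is a
two-list functional queue where the expensive rebuild is suspended behind
`lazy`: elements are pushed onto a rear list, and the `List.rev` that moves
them to the front is paid for only when the front is actually consumed.

```ocaml
(* Two-list queue; the front is a suspension so rebuilds are deferred. *)
type 'a queue = { front : 'a list Lazy.t; rear : 'a list }

let empty = { front = lazy []; rear = [] }

(* O(1): just cons onto the rear. *)
let push x q = { q with rear = x :: q.rear }

(* Force the front; if it is exhausted, reverse the rear on demand. *)
let pop q =
  match Lazy.force q.front with
  | x :: f -> Some (x, { q with front = lazy f })
  | [] ->
      (match List.rev q.rear with
       | [] -> None
       | x :: f -> Some (x, { front = lazy f; rear = [] }))
```

The occasional `List.rev` batches many deferred moves into one pass, so the
cost is amortised across the pushes that created the rear list.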

So laziness can be used as an optimisation.

