Hello,
Has anybody done benchmarks to evaluate the cost of encapsulating a lazy
computation, in terms of time, memory, and garbage collection? I have no
idea how this is implemented...
Here's my personal case: there is a function f which I want to compute
for several arguments x_1, ..., x_n.
let f x =
  [beginning]
  let intermediate_value = ... in
  [end]
except that I want to carry the computation through to [end] only for
the x_i that has the highest intermediate_value among the x_i's. This
intermediate value is used anyway by the [end] part.
Naive design (design a):
let f x =
  [beginning]
  let intermediate_value = ... in
  (intermediate_value, lazy [end])
the lazy computation of [end] being forced only for the x_i that has
the highest intermediate_value.
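To make design a concrete, here is a minimal runnable sketch;
`beginning` and `finish` below are dummy stand-ins I chose for
[beginning] and [end], and `best` assumes the argument list is
non-empty:

let beginning x =                    (* dummy stand-in for [beginning] *)
  float_of_int x *. 2.0

let finish x intermediate_value =    (* dummy stand-in for [end] *)
  intermediate_value +. float_of_int x

let f x =
  let intermediate_value = beginning x in
  (intermediate_value, lazy (finish x intermediate_value))

(* Apply f to every argument, keep the pair with the highest
   intermediate value, and force the suspended [end] of that pair
   only.  Assumes xs is non-empty. *)
let best xs =
  let pairs = List.map f xs in
  let _, suspended =
    List.fold_left
      (fun (iv, l) (iv', l') -> if iv' > iv then (iv', l') else (iv, l))
      (List.hd pairs) (List.tl pairs)
  in
  Lazy.force suspended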
Another possible design, less elegant (design b):
let intermediate_value x =
  [beginning]
  let intermediate_value = ... in
  intermediate_value

let f x =
  [beginning]
  [end]
i.e. I compute the intermediate value for every argument, and then redo
the whole computation from the beginning for the single x I actually
want to compute.
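The corresponding sketch for design b, reusing the same dummy
`beginning` and `finish` as above: [beginning] runs once per argument
to rank them, and then a second time, inside f, for the winner only.

let intermediate_value x =           (* [beginning] only *)
  beginning x

let f x =
  let iv = beginning x in            (* [beginning] recomputed here *)
  finish x iv                        (* ... before running [end] *)

(* Rank the arguments by intermediate value, then call the full f
   on the winner alone.  Assumes xs is non-empty. *)
let best xs =
  let winner, _ =
    List.fold_left
      (fun (bx, biv) x ->
         let iv = intermediate_value x in
         if iv > biv then (x, iv) else (bx, biv))
      (List.hd xs, intermediate_value (List.hd xs))
      (List.tl xs)
  in
  f winner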
Comparison of time costs: if

  B is the cost of [beginning],
  E is the cost of [end],
  L is the cost of encapsulating the lazy computation of [end],

then (very roughly, I suppose)

  design a costs n*(B+L) + E
  design b costs (n+1)*B + E

(design b pays B once per argument to rank them, then B once more, plus
E, for the winner). The difference is n*L - B, so design a wins on time
exactly when n*L < B.
Clearly, deciding which design to adopt is a trade-off depending on n,
B and L. I suppose L also depends on the number of results from
[beginning] that the computer will need to "remember" for [end]? Also,
encapsulating lazy computations means more memory allocation, which in
turn means more garbage collection, doesn't it?
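If it helps, one could probe the allocation question empirically with
something like the sketch below. It only uses the standard Gc module,
but the helper `allocated`, the count n and the dummy thunk body are
made up for illustration, and the printed figure also includes the
array holding the thunks, so it is indicative at best.

(* Bytes allocated while running g; Gc.allocated_bytes is in the
   standard library. *)
let allocated g =
  let a0 = Gc.allocated_bytes () in
  g ();
  Gc.allocated_bytes () -. a0

let () =
  let n = 100_000 in
  let bytes =
    allocated (fun () ->
      (* one thunk per element; each closure captures i *)
      ignore (Array.init n (fun i -> lazy (i + 1))))
  in
  Printf.printf "approx. bytes per thunk: %.1f\n" (bytes /. float_of_int n)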
In my case the efficiency bottleneck is E, not B, and n is about 10
(i.e. high), so I'm not expecting a wonderful overall time gain. I'm
just wondering whether it's costly to implement it in the way that
corresponds best to reality (design a). B is only a dozen flops.
Could anybody give me a hint about the order of magnitude of L?
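For concreteness, the kind of micro-benchmark I have in mind is
sketched below; the helper `time`, the iteration count and the dummy
`body` are arbitrary choices of mine, and a clever compiler might
optimize part of the lazy overhead away, so any numbers should be taken
with a grain of salt.

(* CPU time taken by g, via Sys.time from the standard library. *)
let time label g =
  let t0 = Sys.time () in
  g ();
  Printf.printf "%s: %.3f s\n" label (Sys.time () -. t0)

let () =
  let iters = 1_000_000 in
  let body x = x + 1 in
  (* run the code directly *)
  time "direct" (fun () ->
    for i = 1 to iters do ignore (body i) done);
  (* allocate a thunk, then force it immediately *)
  time "lazy" (fun () ->
    for i = 1 to iters do ignore (Lazy.force (lazy (body i))) done)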
Thanks very much in advance for your answers.
Benoît de Boursetty.