From: Andreas Rossberg <rossberg@m...>
Subject: Re: [Caml-list] Re: Help me find this pdf
"Robert Fischer" <robert@fischerventure.com> wrote:
>
>> 1. Purity and evaluation regime are separate issues. You can very well 
>> have a pure language that is eager.
> The way I understand it, the omnipresent laziness of Haskell is at least 
> partly justified because it is a way of encouraging being functionally 
> pure.  Is that wrong?

I guess you are alluding to SPJ's famous quote that "one advantage of 
laziness is that it keeps the language designer honest". But that wasn't its 
motivation; it was a tongue-in-cheek observation, made after more than a 
decade of language evolution.

>> 6. Nevertheless, evaluation is fully deterministic, thus it certainly 
>> cannot cause Heisenbugs, neither semantically nor performance-wise.
> If I put in a debug statement, it will likely change the timing when 
> something gets forced, right?  This, in turn, can change all kinds of 
> time/space performance characteristics, and potentially even the code that 
> is executed (through interrupting deforestation).  So your debugging 
> statements might well change the very characteristic of the program that 
> you're defining as a bug.

In an (otherwise pure) language, (even impure) debug output cannot hide a 
bug that is present without it. You are right that it can change the 
performance characteristics of a program by inducing additional strictness. 
But debug outputs are rather useless for performance tuning anyway, for 
which you would use higher-level tools.
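To make the first point concrete, here is a small sketch (mine, not from the
original mail): an impure debug print forces a thunk earlier than it would
otherwise be demanded, but the value the rest of the program sees is
unchanged.

```ocaml
(* Sketch: debug output changes *when* a thunk is forced, not *what*
   it yields. The [steps] counter shows the body runs exactly once. *)

let steps = ref 0

let expensive = lazy (incr steps; 6 * 7)

(* "Debug output": forcing the value early to inspect it. *)
let () = Printf.printf "debug: %d\n" (Lazy.force expensive)

(* The later, "real" use sees exactly the same value, and the
   memoized thunk is not recomputed. *)
let result = Lazy.force expensive

let () = assert (result = 42 && !steps = 1)
```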

"Jon Harrop" <jon@ffconsultancy.com> wrote:
>
>> 1. Purity and evaluation regime are separate issues. You can very
>> well have a pure language that is eager.
>
> IIRC, you then need the option of laziness to be able to implement all 
> programs with the same asymptotic complexity as impure/eager or pure/lazy.

Even with laziness you cannot always achieve the same complexity in a pure 
language, unless you have something like built-in state monads giving you 
constant-time update in a pure fashion - which is independent of laziness.
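The cost gap is easy to see in a sketch (illustrative, not from the mail): a
purely functional update of position i in a sequence must rebuild the cells
up to i, i.e. O(n), while the impure in-place update is O(1) - the
constant-time behaviour that built-in monadic state would recover purely.

```ocaml
(* Pure update: rebuild the list up to position i. O(n). *)
let rec set_pure lst i v =
  match lst, i with
  | [], _ -> []                       (* out-of-range index: no-op *)
  | _ :: tl, 0 -> v :: tl
  | hd :: tl, i -> hd :: set_pure tl (i - 1) v

(* Impure update: O(1), but observable by every holder of the array. *)
let set_impure arr i v = arr.(i) <- v

let xs = [1; 2; 3; 4]
let ys = set_pure xs 2 99

let a = [| 1; 2; 3; 4 |]
let () = set_impure a 2 99

let () =
  assert (ys = [1; 2; 99; 4]);
  assert (xs = [1; 2; 3; 4]);        (* the original is untouched *)
  assert (a = [| 1; 2; 99; 4 |])
```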

>> 2. However, in a pure language the details of evaluation order are
>> largely immaterial to its semantics, which obviously is an advantage.
>
> I'm not sure that unknown evaluation order is an "obvious" advantage in 
> general. For example, when evaluating "f && g" where f is O(1) and g is 
> O(n!) you might want to know that "f" gets evaluated first.

I didn't say it is an advantage if you don't know. It is an advantage if you 
don't *have to* know.
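For comparison, in an eager language like OCaml the order of "f && g" is
part of the semantics: e1 is evaluated first and e2 is skipped when e1 is
false. A small sketch (mine, not from the mail), with a counter standing in
for the hypothetical O(n!) test:

```ocaml
(* OCaml guarantees left-to-right, short-circuit evaluation of &&,
   so a cheap guard can reliably protect an expensive test. *)

let calls = ref 0

let cheap = false                      (* the O(1) guard *)
let expensive () = incr calls; true    (* stands in for the O(n!) test *)

let r = cheap && expensive ()

let () = assert (r = false && !calls = 0)  (* expensive never ran *)
```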

>> 3. Lazy evaluation by itself is as precise an evaluation scheme as
>> eager evaluation. There is no inherent vagueness.
>
> Does a thunk preventing a large data structure from being deallocated 
> count as "inherent vagueness"?

I don't think so. At least in principle, you always know when something is 
evaluated. In practice that can be hard to determine, of course, but that 
doesn't make it "vague".
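The point that evaluation status is knowable rather than vague can be shown
in OCaml's Lazy module (a sketch of mine, not from the mail): Lazy.is_val
reports precisely whether a suspension has been evaluated, and until it has,
the thunk keeps whatever it captures - here a large list - reachable.

```ocaml
(* A thunk is not vague: its evaluation status is observable, and the
   suspension retains its captured environment until it is forced. *)

let big = List.init 1_000_000 (fun i -> i)

let summary = lazy (List.length big)   (* thunk captures [big] *)

let () = assert (not (Lazy.is_val summary))   (* not yet evaluated *)

let n = Lazy.force summary

let () = assert (Lazy.is_val summary && n = 1_000_000)
```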

The vagueness of garbage collection itself is a different issue that is not 
addressed in any major language I am aware of.

>> 5. The problem with Haskell and laziness on the other hand is not
>> semantic bugs, but the fact that it can make space complexity hard to
>> predict sometimes.
>
> And time performance hard or impossible to achieve.

Evaluating a given expression lazily can never take more steps than 
evaluating it eagerly. So in principle, achieving a certain time complexity 
is no harder than under eager evaluation (everything else being equal). But 
when you try to be smarter and actually exploit laziness you can certainly 
run into surprises.
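The "no more steps than eager" point can be sketched with OCaml's lazy Seq
module (my example, not from the mail; Seq.take needs OCaml >= 4.14): when
only three elements of a long mapped sequence are consumed, only three are
ever produced, whereas an eager map would compute all of them.

```ocaml
(* Lazy evaluation does no more work than demanded: the counter
   records how many elements the mapped sequence actually yields. *)

let produced = ref 0

let numbers =
  Seq.map (fun i -> incr produced; i * i)
    (Seq.init 1_000_000 (fun i -> i))

(* Consume only the first three elements. *)
let first3 = List.of_seq (Seq.take 3 numbers)

let () = assert (first3 = [0; 1; 4] && !produced = 3)
```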

>> 6. Nevertheless, evaluation is fully deterministic, thus it certainly
>> cannot cause Heisenbugs, neither semantically nor performance-wise.
>
> Perhaps I've missed something but surely evaluation order can alter 
> asymptotic complexity? If so, moving from one compiler to another can 
> change the asymptotic complexity of your program?

Well, that is a slightly different issue. Of course, certain high-level 
optimizations can cut down the complexity of programs. If you move to a 
compiler that does not perform them you may see a complexity increase. 
However, the naive unoptimized semantics will always be the upper bound.

On the other hand, this is simply an instance of a general problem you see 
with many languages: the more aggressive your compiler is, the more careful 
you have to be about the performance assumptions you rely on when coding.

- Andreas