From: Chet Murthy <chet@w...>
Subject: Re: callcc/cps-style programming

Dave,

I forget the reference -- I myself only saw it on the Web a few years
back, but I'm sure that if you find Ousterhout's publications, it
should be there.  In addition, it was (one of) the cause(s) of
Stallman's Great Anti-TCL Rant.  Not that I'm a fan of TCL.  Just
that, well, I have to give Ousterhout his props.  I give Stallman his
props, too, of course.

I do understand and agree that lightweight threads often make certain
types of I/O-intensive applications easier to write.  But they come at
a considerable cost, which takes two forms -- performance and code
quality:

  (a) context-switching becomes expensive -- register-set swaps are
  required

  (b) programmers simply do not know how to write multi-threaded code

I've seen both of these problems in the real world.

It is well-understood at this point that unless you have
compute-intensive tasks, it is _always_ more efficient to use
user-level threads than it is to use kernel threads.  In addition, one
finds that most of the old-line operating systems did explicit
continuation-style suspension.  I.e., events, not kernel threads.
What all this means is that the more explicitly you can get a handle
on the state you need for your continuing computation, the more
efficient you can make that suspend/resume.  And that's important in a
server environment.
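
To make that concrete, here is a rough sketch in OCaml of what I mean
by getting an explicit handle on the continuation state.  The names
(suspended, handle_request, read_line, reply) are made up for
illustration; this is a sketch, not a library.

  (* A suspended computation is just a closure: feed it the value it
     was waiting for and it carries on.  The state needed to resume is
     exactly what the closure captures -- no saved register set, no
     private stack. *)
  type 'a suspended = 'a -> unit

  (* A handler that needs two reads before it can reply.  Instead of
     blocking a thread on each read, it hands the dispatcher a
     continuation for the rest of the work. *)
  let handle_request ~(read_line : string suspended -> unit)
                     ~(reply : string -> unit) =
    read_line (fun user ->
      read_line (fun query ->
        (* Only [user] and [query] are live while we wait; that is all
           the server has to keep around for this request. *)
        reply (Printf.sprintf "user=%s query=%s" user query)))

The dispatcher can park that closure in a table keyed by file
descriptor and drop it the moment the reply goes out; compare that
with keeping an entire thread stack alive for the same wait.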

[My daily work for the past 6 years has been making high-volume
web-application servers work under extreme load conditions for large
enterprise customers.  CPU budgets matter as much as they ever did in
the '70s, if not more.  No reasonable number of extra CPUs from
CompUSA will catch up to a Super Bowl ad.]

Likewise, I've seen enough different examples, both from within the
JDK and from outside of it, of multi-threaded code causing horrors, to
believe that programmers simply don't know how to write it.

Stuff like:

  (i) allocating memory in the middle of a finalizer -- e.g., a
  finalizer which is pushing some reusable resource onto some stack
  (and the stack needs to be grown)

  (ii) locking some object in a finalizer, when that object can also
  be locked by some thread which would allocate memory with the lock
  held (see the OCaml sketch after this list).

  (iii) SMP-unsafe code galore -- race conditions, memory-incoherence,
  thrash-prone code.
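
Here is pitfall (ii) rendered in OCaml terms, with Gc.finalise and
Mutex standing in for the JDK machinery; the buffer pool is invented
for the example, and you need the threads library for Mutex:

  (* A finalizer that takes a lock which ordinary threads also hold
     while allocating is a latent deadlock: finalizers can run
     whenever the GC does, i.e. at allocation points, so this one may
     fire on a thread that already holds pool_lock, and OCaml mutexes
     are not re-entrant. *)
  let pool_lock = Mutex.create ()
  let pool : Bytes.t list ref = ref []

  let release_to_pool buf =
    Mutex.lock pool_lock;   (* may run on a thread that holds it *)
    pool := buf :: !pool;   (* and this cons allocates inside a
                               finalizer -- pitfall (i) again *)
    Mutex.unlock pool_lock

  let make_buffer () =
    let buf = Bytes.create 4096 in
    Gc.finalise release_to_pool buf;
    buf

Whether that ever actually deadlocks depends on when the collector
happens to run, which is exactly why this kind of bug sails through
testing.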

This is but a short, short list.  What really bothers me about
threading is that, well, the only cases where threading is palatable
for vanilla programmers are those where the threads do not interact.
And, yeah, sure, that's nice.  But that's a pretty small subset of the
important applications in this world.

Even in transaction-processing, where all transactions are supposed to
be "isolated", that's just the _model_.  Of _course_ transactions
interact, in the way they access shared resources, pooled connections,
caches of precomputed (but slowly invalidated) data, etc.

Conclusion:

I'm not saying that threads are bad.  Rather, what I'm saying is,
there is little need, in a language like CAML, for _user-level_
threads.  Kernel threads, you need, for talking to DB2 or Oracle.
Period.  But user-level threads, implemented in the language, don't
give you much value.  Much better to do some sort of explicit (perhaps
semi-automated) CPS-conversion, and use an event-dispatcher.

Amongst other things, at least, you'll never have to worry about being
time-sliced where you didn't expect it.
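
For what it's worth, the dispatcher side of this doesn't have to be
anything grand.  Here is a minimal sketch in OCaml -- Unix.select plus
a table of callbacks.  Every name is made up, it needs the unix
library, and a real server would want timeouts, writes, and error
handling:

  (* Callbacks registered per file descriptor; the main loop is a
     plain Unix.select.  Each callback is an explicit continuation:
     it captures whatever state its computation needs next. *)
  let handlers : (Unix.file_descr, unit -> unit) Hashtbl.t =
    Hashtbl.create 16

  (* Run k when fd becomes readable. *)
  let on_readable fd k = Hashtbl.replace handlers fd k

  let run () =
    while Hashtbl.length handlers > 0 do
      let fds = Hashtbl.fold (fun fd _ acc -> fd :: acc) handlers [] in
      let ready, _, _ = Unix.select fds [] [] (-1.0) in
      List.iter
        (fun fd ->
           match Hashtbl.find_opt handlers fd with
           | Some k -> Hashtbl.remove handlers fd; k ()
           | None -> ())
        ready
    done

A handler that wants to wait just calls on_readable with its
continuation and returns; the loop does the rest.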

--chet--