Re: Interpreter vs hardware threads

From: Xavier Leroy (xleroy@pauillac.inria.fr)
Date: Wed Mar 08 2000 - 21:02:31 MET

    > | Very lightweight
    > | threads do exist, see e.g. the call/cc-based threads of SML/NJ, but
    > | entail significant performance penalties not only on I/O, but also on
    > | the actual running speed of the sequential code.
    > Is that really so, Xavier? What percentage performance penalty do
    > you think is involved? 1%? 10%? Factor of 2?

    For SML/NJ, I think their stackless execution model (all activation
    records being heap-allocated) entails a significant speed penalty for
    sequential code -- at least 20%, I'd say. But it sure gives you
    blazingly fast thread creation.
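
    To make the "stackless" point concrete, here is a rough sketch, in
    OCaml rather than SML, of what continuation-based cooperative threads
    look like (the names spawn/yield/run are made up for the example, and
    SML/NJ really uses call/cc rather than explicit CPS). The point is
    that a thread is just a closure sitting in a queue, so creating one
    costs little more than a heap allocation:

      (* A ready queue of continuations: each "thread" is a closure. *)
      let ready : (unit -> unit) Queue.t = Queue.create ()

      (* Creating a thread is just enqueueing its body. *)
      let spawn (body : unit -> unit) = Queue.add body ready

      (* Yielding enqueues the rest of the computation [k] and returns
         to the scheduler. *)
      let yield (k : unit -> unit) = Queue.add k ready

      (* The scheduler runs threads until the ready queue is empty. *)
      let run () =
        while not (Queue.is_empty ready) do
          (Queue.pop ready) ()
        done

      (* Two threads interleaving; each [yield] names its continuation
         explicitly (CPS). *)
      let rec worker name n () =
        if n > 0 then begin
          Printf.printf "%s: %d\n" name n;
          yield (worker name (n - 1))
        end

      let () =
        spawn (worker "A" 3);
        spawn (worker "B" 3);
        run ()

    The price is that ordinary sequential code must be written (or
    compiled) in this continuation-passing, heap-allocated style, which
    is where the speed penalty mentioned above comes from.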

    The Concurrent Haskell approach that you describe is somewhat less
    lightweight than the SML/NJ approach. In particular, thread creation
    is going to be more expensive because of the cost of allocating and
    setting up a new stack. Also, the memory consumption of each thread
    is going to be higher. I guess the SML/NJ approach could accommodate
    one million threads, while Concurrent Haskell sounds more in the 100 000s.
    Still, I agree the solution you outline is pretty lightweight.

    The expression "lightweight threads" is getting too vague. I guess we
    need several degrees of lightweightness, from SML/NJ's "ultra-lightweight
    threads" to POSIX's "not so lightweight threads", with Haskell's
    "pretty lightweight" threads and Caml's "lightweight, but no more"
    threads...

    > For potentially-blocking I/O operations it is true that there is
    > some extra work to do, much like a context switch. The pointers need
    > to be saved in a safe place in case GC strikes, and a global lock should
    > be released so that a new heavyweight thread can take over the business
    > of running the lightweight threads if the I/O blocks. But none of
    > this seems really expensive in terms of % of computation time, does it?

    You have to add to this the overhead on the I/O operation itself. If
    your threads build upon heavyweight threads for I/O, that should be
    negligible. (Say, one or two extra kernel context switches.) But if
    you have to perform I/O polling via select() (like OCaml bytecode
    threads do for maximal portability), the overhead becomes significant.
    (select() is one of the most expensive Unix system calls, in
    particular because the kernel has to scan a huge set of file
    descriptors just to determine which ones to wait upon; it's so bad
    that Unix 98 standardized an alternative interface, poll(), which does
    exactly the same thing but with a more compact description of the f.d. sets.)
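
    To give an idea of the polling overhead: a select()-based scheduler
    ends up wrapping each potentially blocking read with something like
    the following rough OCaml sketch (here [suspend] is a placeholder for
    whatever mechanism the thread library uses to park the current thread
    and run another one; it is not a real library call):

      (* Poll the descriptor with a zero timeout; if it is not readable
         yet, let other threads run and try again later. *)
      let read_nonblocking suspend fd buf pos len =
        let rec try_read () =
          match Unix.select [fd] [] [] 0.0 with
          | [_], _, _ ->
              (* Readable: this read will not block. *)
              Unix.read fd buf pos len
          | _ ->
              (* Not ready: switch to another thread, then retry. *)
              suspend ();
              try_read ()
        in
        try_read ()

    So every I/O operation pays for at least one extra select() system
    call on top of the read itself.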

    - Xavier Leroy


