Date: Sun, 10 Oct 1999 21:48:41 +0100
Message-Id: <24949.199910102048@buckie>
From: William Chesters <williamc@dai.ed.ac.uk>
To: caml-list@inria.fr
Subject: Re: speed versus C
In-Reply-To: <99101018244300.30629@ice>
Gerd Stolpmann writes:
> If you wanted to have a fully general substitute of closures in C (or
> assembler), you could do it as follows: For every function store a function
> pointer and an array of implicit parameters, e.g.
I'm not sure we are really connecting here. The fragment I quoted
involved a table of functions which share "implicit parameters" (the
`file' struct)---i.e., a thinly disguised C++ object, implemented in
exactly the way cfront used to do it.
(I wish I hadn't mentioned objects at all. The simpler case of a
single function pointer associated with a single implicit parameter is
common in the APIs to numerical library routines.)
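To make that concrete, here is roughly the idiom I mean (all names
invented, and a crude midpoint rule standing in for a real library
routine):

    #include <stdio.h>

    /* The integrand gets its explicit argument x plus one "user data"
       pointer: a closure split into its two halves. */
    typedef double (*integrand)(double x, void *env);

    /* Stand-in for a library quadrature routine. */
    static double integrate(integrand f, void *env,
                            double a, double b, int n)
    {
        double h = (b - a) / n, sum = 0.0;
        int i;
        for (i = 0; i < n; i++)
            sum += f(a + (i + 0.5) * h, env);
        return sum * h;
    }

    /* The "closure body": scale * x * x, with scale captured in env. */
    static double scaled_square(double x, void *env)
    {
        return *(double *)env * x * x;
    }

    int main(void)
    {
        double scale = 3.0;  /* the captured variable */
        /* integral of 3 x^2 over [0,1] is 1 */
        printf("%g\n", integrate(scaled_square, &scale, 0.0, 1.0, 1000));
        return 0;
    }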
> In object-oriented languages there is another way of paraphrasing
> closures.
As I said, a closure is an object with only one method.
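In C terms, that one-method object is just the function pointer and its
captured variables bundled into one struct. A sketch (my names, nobody's
real API):

    #include <stdio.h>
    #include <stdlib.h>

    /* A closure: one "method" plus the record of captured variables. */
    struct adder {
        int (*apply)(struct adder *self, int x);  /* the single method */
        int n;                                    /* the captured variable */
    };

    static int adder_apply(struct adder *self, int x)
    {
        return x + self->n;
    }

    /* Like writing: let make_adder n = fun x -> x + n */
    static struct adder *make_adder(int n)
    {
        struct adder *c = malloc(sizeof *c);
        c->apply = adder_apply;
        c->n = n;
        return c;
    }

    int main(void)
    {
        struct adder *add3 = make_adder(3);
        printf("%d\n", add3->apply(add3, 4));  /* prints 7 */
        free(add3);
        return 0;
    }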
> >(Though I'd argue that's because it sticks to
> >abstractions that "ornament" the low-level computational model without
> >"obscuring" it :-) .)
>
> I think this is exactly the point where we have different opinions.
More like, we are understanding "the low-level model" to mean
different things. I am happy to consider a function pointer plus a
persistent data record to "really be" a closure---something which one
might not realise before one was exposed to functional languages, so
that they enrich
and clarify one's understanding of low-level programming---whereas you
perhaps aren't?
Give me a little credit and try to understand what I say charitably
:-). I don't know what your background is, and I don't know how much
patience you have with "impressionistic" ideas. But I did once study
formal semantics, domain theory and the deep way different
computational models relate to each other in some detail, so I am
perfectly well aware of what constitutes a tight argument in this
context.
My point was simply that nearly every* feature of ocaml, however
abstract in appearance, compiles directly, and compositionally, onto
an idiom which one might well use in C or even assembler---give or
take some amount of sugar. Looking at this fact one way round, I
observe that the reason ocaml is so fast is that it mostly* stays
within the framework of the traditional computer model; looking at it
from the other direction, I note that the constructs which ocaml maps
onto the various different C idioms illuminate the "deeper meaning" of
the latter in terms of a much more abstract semantics.
* apart from GC and the ocaml classes (of which I must admit I am
slightly suspicious, because of the significant overhead in a method
call---you don't really want to use them in an inner loop)
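For instance, here is how I would render a variant type and its pattern
match in C (my own layout for illustration, not the one the ocaml
runtime actually uses):

    /* Corresponds to the variant type
           type shape = Circle of float | Rect of float * float */
    #include <stdio.h>

    enum shape_tag { CIRCLE, RECT };

    struct shape {
        enum shape_tag tag;
        union {
            struct { double r; } circle;
            struct { double w, h; } rect;
        } u;
    };

    /* The pattern match becomes a switch on the tag. */
    static double area(const struct shape *s)
    {
        switch (s->tag) {
        case CIRCLE: return 3.141592653589793 * s->u.circle.r * s->u.circle.r;
        case RECT:   return s->u.rect.w * s->u.rect.h;
        }
        return 0.0;  /* unreachable for a well-formed tag */
    }

    int main(void)
    {
        struct shape c;
        c.tag = CIRCLE;
        c.u.circle.r = 1.0;
        printf("%g\n", area(&c));  /* pi */
        return 0;
    }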
Compare this with lazy languages, with which the whole discussion
started: they must necessarily use the traditional CPU in a pretty
contorted way to implement a basically foreign computational model.
(Graph reduction, or however you like to present it.) Compare it too
with SML/NJ, which supports continuations and therefore has to
allocate its stack frames on the heap---crazy, because continuations
aren't all that useful (corresponding most closely to a non-local
JMP), and no one seems to believe their protestations that this
implementation carries zero performance penalty.
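For what it's worth, the contortion I mean looks like this in
miniature: under laziness every value becomes a thunk that must be
checked, and possibly evaluated, at every use (layout and names mine,
not any particular compiler's):

    #include <stdio.h>

    /* A suspended computation: graph reduction in miniature. */
    struct thunk {
        int forced;                 /* has the value been computed yet? */
        int value;                  /* valid only once forced */
        int (*compute)(void *env);  /* the suspended computation */
        void *env;
    };

    /* Every use of a lazy value pays for this check. */
    static int force(struct thunk *t)
    {
        if (!t->forced) {
            t->value = t->compute(t->env);  /* reduce the node... */
            t->forced = 1;                  /* ...and overwrite it in place */
        }
        return t->value;
    }

    static int expensive(void *env)
    {
        (void)env;
        printf("(computing...)\n");
        return 42;
    }

    int main(void)
    {
        struct thunk t = { 0, 0, expensive, NULL };
        printf("%d\n", force(&t));  /* computes */
        printf("%d\n", force(&t));  /* reuses the cached value */
        return 0;
    }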
I contend that on the one hand stepping distinctly outside the
traditional model means slowness, and on the other that the
traditional model is not a bad one to think in, as long as your
understanding of it is enriched by experiencing and preferably using a
language like ocaml (and/or C++).
Anyway, thanks for the discussion!
Cheers,
William