
This site is rarely updated. For the latest information, visit the new OCaml site at ocaml.org.

Generators/iterators and lazy evaluation?
Date: 2007-04-04 (22:55)
From: Bill Wood <william.wood3@c...>
Subject: Re: [Caml-list] Generators/iterators and lazy evaluation?
On Thu, 2007-04-05 at 07:16 +1000, Erik de Castro Lopo wrote:
> Alain Frisch wrote:
> > I think you cannot implement this notion of iterator without some kind
> > of control flow operator (at runtime) or some code rearrangement (at
> > compile time). The point is that you must maintain several active
> > "threads" of computation, including their own stacks.
> Alain, I think you may have misunderstood Python's yield operator.
> Typically the original poster's pow2 generators would be used like:
>     for p in pow2(10):
>         print p
> but I really don't see how the laziness of Python's generators actually
> provides any benefit for this problem over the more obvious Python
> solution, which is:
>     for i in range(10):
>         p = 2 ** i
>         print p
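For readers unfamiliar with Python generators, a sketch of what the original poster's pow2 might have looked like — only the name and the call pow2(10) appear in the thread; the body is assumed, and written in modern Python 3 syntax:

```python
def pow2(n):
    # Yield the first n powers of two one at a time; each value is
    # computed only when the caller asks for the next one.
    p = 1
    for _ in range(n):
        yield p
        p *= 2

for p in pow2(10):
    print(p)
```

The `yield` suspends the function and resumes it on the next request, which is exactly the "several active threads of computation" Alain describes.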

I think the motivation was to address the memory issues with the "range"
function, which is most often used to implement "for" loops (range(N)
returns the list [0, ..., N-1]).  People complained about a control
structure allocating memory linear in the depth of the loop, so "xrange"
was created; it generates the items of the list one at a time rather
than all at once.  Further releases of Python generalized to iterators
and generators.  It does make a difference too.  Somebody wrote a Sieve
of Eratosthenes using Python's high-level list operators; it blew up on
fairly small arguments.  I wrote a straightforward Sieve using indexing
into a static array, with reasonable time performance.  I then rewrote
it using an iterator and got analogous time performance (roughly a
factor of 2 penalty), without statically allocating the entire sieve
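Bill's iterator-based sieve is not shown in the message, but one common way to express an unbounded Sieve of Eratosthenes as a Python generator — a sketch using the well-known dict-of-composites technique, not his actual code — is:

```python
import itertools

def primes():
    # Incremental Sieve of Eratosthenes: rather than marking a fixed
    # array, keep a dict mapping each upcoming composite to one of its
    # prime factors, so memory grows only with the primes found so far.
    composites = {}
    n = 2
    while True:
        if n not in composites:
            yield n                    # n is prime
            composites[n * n] = n      # its first unmarked multiple
        else:
            p = composites.pop(n)
            m = n + p
            while m in composites:     # slide to the next free multiple of p
                m += p
            composites[m] = p
        n += 1

print(list(itertools.islice(primes(), 10)))
# prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Like Bill's version, this trades a modest constant factor in time for not allocating the whole sieve up front, and it imposes no upper bound on the primes produced.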