From: Diego Olivier FERNANDEZ PONS <diego.fernandez_pons@e...>
Subject: Re: [Caml-list] Reordering continuations (was :Type inference inside exceptions ?)

I am not sure I understood all of your comments, but I can at least
answer a few of them.

[Francisco Valverde wrote]
> It is also the technique to find alternative recognition hypotheses  
> in most speech recognizers (hypothesis graph search with dynamical  
> reordering & pruning of nodes & backtrack, in a nutshell).

Parsers and combinatorial optimization engines are equivalent in the
sense that for every combinatorial problem you can find a grammar and
a string whose solutions are the solutions of the combinatorial
problem (think of Prolog), and reciprocally under some reasonable
hypotheses. This is true of anything that gets close to
NP-completeness, but in this case the two problems are really closely
related.

Parsing Techniques explains how to compile a non-deterministic stack
automaton (e.g. an LR grammar with conflicts, or an ambiguous grammar)
into an exception-based implicit enumeration algorithm (i.e. a
recursive ascent parser implemented with a set of recursive functions).

Parsing Techniques - A Practical Guide
Dick Grune and Ceriel J.H. Jacobs (1990) on the web
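As a toy illustration of the exception-based style (my own sketch, not
code from the book), here is a backtracking recognizer for the grammar
S -> "ab" | "a" S "b": raising an exception unwinds the call stack back
to the nearest enclosing choice point, where the next alternative is
tried.

```ocaml
(* Hypothetical sketch: exception-based backtracking recognizer
   for S -> "ab" | "a" S "b" (the language a^n b^n, n >= 1).
   Fail unwinds to the nearest enclosing choice point. *)
exception Fail

(* parse one S from [input], returning the unconsumed suffix *)
let rec s input =
  try
    (* first alternative: "ab" *)
    (match input with
     | 'a' :: 'b' :: rest -> rest
     | _ -> raise Fail)
  with Fail ->
    (* second alternative: "a" S "b" *)
    (match input with
     | 'a' :: rest ->
         (match s rest with
          | 'b' :: rest' -> rest'
          | _ -> raise Fail)
     | _ -> raise Fail)

(* accept iff the whole string is consumed *)
let recognize str =
  let chars = List.init (String.length str) (String.get str) in
  try s chars = [] with Fail -> false
```

Note the limitation of this naive version: it commits to the first
locally successful alternative, which happens to be safe for this
grammar; a full recursive ascent parser would pass continuations so
that a later failure can revisit earlier choices.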

A few years ago I put on my web page a few toy LR parsers written in
that style, and an Earley parser (in which continuations reified as
lists are executed in breadth-first order and are in some sense
memoized).

There are working parsers that use this technique, including

- Frown, an LALR(k) parser generator for Haskell
Ralf Hinze, Ross Paterson

- The Essence parser generator for Scheme
Mike Sperber (uses partial evaluation instead of code generation)

Both come with interesting papers.

The general idea is that the definition of the search space
(construction of a stack automaton) and its exploration (optimistic ->
depth-first search = LR, pessimistic -> breadth-first search = Earley)
are two orthogonal things.
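This orthogonality can be made concrete in a few lines of OCaml (a
minimal sketch with invented names): the search space is given by a
successor function, and the exploration strategy is an abstract
frontier, LIFO for depth-first, FIFO for breadth-first.

```ocaml
(* Hypothetical sketch: the same exploration loop enumerates a search
   space depth-first or breadth-first depending on the frontier. *)
type 'a strategy = {
  push : 'a -> unit;
  pop  : unit -> 'a option;   (* None when the frontier is empty *)
}

let stack_strategy () =              (* LIFO -> depth-first *)
  let s = ref [] in
  { push = (fun x -> s := x :: !s);
    pop  = (fun () ->
      match !s with [] -> None | x :: r -> s := r; Some x) }

let queue_strategy () =              (* FIFO -> breadth-first *)
  let q = Queue.create () in
  { push = (fun x -> Queue.add x q);
    pop  = (fun () ->
      if Queue.is_empty q then None else Some (Queue.pop q)) }

(* enumerate all states reachable from [init] via [next],
   collecting those accepted by [goal], in visit order *)
let explore strategy next goal init =
  let results = ref [] in
  strategy.push init;
  let rec loop () =
    match strategy.pop () with
    | None -> List.rev !results
    | Some state ->
        if goal state then results := state :: !results;
        List.iter strategy.push (next state);
        loop ()
  in
  loop ()
```

Swapping the strategy changes only the order in which solutions are
produced, not which solutions exist.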

The Tomita parsers are a bit more difficult since they are equivalent to
a form of memoization inside an implicit enumeration algorithm (or
conversely a form of duplication at ambiguous points inside a
deterministic algorithm).

Speech recognition is more complicated because of uncertainty, but it
is the same idea.

> Explicit management of the continuation queue/stack/whatever is a
> boon I didn't expect, though! If you come up with a library or  
> modular solution please let me know.

It is rather specific to constraint programming but there is a paper that
describes a system that allows you to specify in a declarative way the
order in which the continuations will be executed. The trick is to
lift the combinatorial engine to the search tree: the execution order
of the continuations is itself a combinatorial program (I believe they
do not bootstrap the solver however but use an ad-hoc mini-solver instead).

ToOLS: A Library for Partial and Hybrid Search Methods (2003)
Simon de Givry, Laurent Jeannin
Proceedings CPAIOR '03 (on Citeseer)
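This is not the ToOLS API, but the flavor can be sketched in OCaml (my
own invented names): suspended continuations carry a score, and a
user-supplied comparison decides which one runs next, so the execution
order of the continuations is itself a small program.

```ocaml
(* Hypothetical sketch: continuations as scored tasks; the [better]
   relation makes the scheduling order a declarative parameter.
   Running a task may suspend further tasks. *)
type task = { score : int; run : unit -> task list }

(* execute all tasks, always picking the one [better] prefers;
   returns the scores in execution order *)
let schedule better tasks =
  let order = ref [] in
  let rec loop = function
    | [] -> List.rev !order
    | ts ->
        let best =
          List.fold_left (fun a b -> if better b a then b else a)
            (List.hd ts) (List.tl ts)
        in
        let rest = List.filter (fun t -> t != best) ts in
        order := best.score :: !order;
        loop (best.run () @ rest)
  in
  loop tasks
```

With `(fun a b -> a.score > b.score)` this is best-first; any other
comparison gives another exploration order without touching the tasks
themselves.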

It is only applicable when the continuations are restricted enough to
carry a clear semantics. If you are looking for more low-level things
you should have a look at Oleg Kiselyov's work, and related papers.

Native delimited continuations in (byte-code) OCaml

> I'm looking forward to hearing more about your research, as always.

Well I can at least explain what I have been trying to do ultimately
with continuations and memoization.

Pisinger introduced a very fast class of algorithms for knapsack
problems which are a hybridization of implicit enumeration (branch and
bound) and dynamic programming.

Knapsack Problems
Hans Kellerer, Ulrich Pferschy, David Pisinger
Springer 2004

[Good overview : New trends in exact algorithms for the 0-1 knapsack
problem. EJOR 123 (2000), on the web]

I want to automatically derive similar algorithms from any implicit
enumeration algorithm, thanks to memoization.
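A toy version of the idea (my sketch, not Pisinger's algorithm):
memoizing the naive 0-1 knapsack enumeration on (remaining capacity,
remaining item suffix) collapses the exponential search into the
classic dynamic program.

```ocaml
(* Hypothetical sketch: memoization turns an implicit enumeration of
   0-1 knapsack into dynamic programming.  Items are (weight, value)
   pairs; a suffix of the item list is identified by its length. *)
let knapsack capacity items =
  let memo = Hashtbl.create 97 in
  let rec best cap items =
    match items with
    | [] -> 0
    | (w, v) :: rest ->
        let key = (cap, List.length items) in
        (match Hashtbl.find_opt memo key with
         | Some r -> r
         | None ->
             let skip = best cap rest in
             let take =
               if w <= cap then v + best (cap - w) rest else 0
             in
             let r = max skip take in
             Hashtbl.add memo key r;
             r)
  in
  best capacity items
```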

There is an additional problem related to combinatorial optimization:
computing an average solution (50% of the optimum) takes 1s,
a good solution (90%) 1 minute, an excellent solution (99%) 1 hour,
the optimal solution 1 day, and the optimality proof 1 week.
But you usually don't need the optimal solution to a subproblem, only
a good enough "lower bound" that enables you either to cut the branch
or to decide to dive into it.

- From the memoization point of view, one has to generalize the
   value table to an improving pair of bounds + continuation queue:

   instead of   memo : (int -> int) -> (int -> int)

   you need     memo : (int -> int * continuation queue) ->
                (int -> (int * int) ref * continuation queue)

   if you want the confidence interval [lower bound..upper bound] of a
   subproblem to be refined, you just execute a few more continuations.

- From the implicit enumeration point of view, one has to order the
   continuations in a "structured stack" where the continuations are
   indexed by the sub-problems they are solving, and add a cache.

   You also need to use the relations between the subproblems: the
   optimal solution of a more constrained problem is an upper bound on
   the optimal solution of a less constrained problem.

   For instance:
   min cardinality subsetsum 15 [17;13;7;5;2] == 2
   min cardinality subsetsum 15 [17;15;13;12;11;7;5;2;1] == 1
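The two example values can be reproduced by a direct (exponential,
purely illustrative) OCaml definition of min-cardinality subset sum:

```ocaml
(* Hypothetical sketch: smallest number of items summing exactly to
   [target], or None if no subset does.  Naive enumeration; each item
   is either taken or skipped. *)
let min_cardinality_subsetsum target items =
  let rec go target items =
    if target = 0 then Some 0
    else
      match items with
      | [] -> None
      | x :: rest ->
          let take =
            if x <= target then Option.map succ (go (target - x) rest)
            else None
          in
          let skip = go target rest in
          (match take, skip with
           | Some a, Some b -> Some (min a b)
           | Some a, None -> Some a
           | None, s -> s)
  in
  go target items
```

The second instance has more items, i.e. is less constrained, and
indeed its optimum (1) is bounded above by the first one's (2).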

This gives you upper and lower bounds from the keys of the table and
the cached partial results.
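In OCaml the generalized table entry might look like this (a minimal
sketch under my own naming, not working code from the project): each
subproblem maps to a mutable bound interval plus a queue of suspended
continuations, and executing a few of them tightens the interval on
demand.

```ocaml
(* Hypothetical sketch: a memo entry holding an improving interval
   [lower, upper] and the suspended refinement steps for that
   subproblem. *)
type entry = {
  bounds : (int * int) ref;          (* current [lower, upper] *)
  pending : (unit -> unit) Queue.t;  (* suspended continuations *)
}

let table : (int, entry) Hashtbl.t = Hashtbl.create 16

(* refine subproblem [key] by executing at most [n] continuations *)
let refine key n =
  match Hashtbl.find_opt table key with
  | None -> ()
  | Some e ->
      for _ = 1 to min n (Queue.length e.pending) do
        (Queue.pop e.pending) ()
      done
```

A caller that only needs a rough interval stops early; a caller that
needs the optimum keeps calling `refine` until the queue is empty and
the bounds coincide.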

I had the idea when reading papers on adaptive functional programming,
especially those of U. Acar:

Umut A. Acar, Guy E. Blelloch, Matthias Blume, Kanat Tangwongsan.
An Experimental Analysis of Self-Adjusting Computation (PLDI 2006)

         Diego Olivier