Date: 2008-03-19 (15:21)
From: Martin Jambon <martin.jambon@e...>
Subject: Re: [Caml-list] ocamllex regexp problem
On Tue, 18 Mar 2008, Jake Donham wrote:

> Hi list,
>
> I am trying to parse an RSS feed using OCaml-RSS, which uses XML-Light,
> which however does not support CDATA blocks. So I added support in the
> ocamllex-based lexer as follows:
>
>   let ends_sq = [^']']* ']'
>   let ends_sq_sq = ends_sq ([^']'] ends_sq)* ']'+
>   let ends_sq_sq_ang = ends_sq_sq ([^'>'] ends_sq_sq)* '>'
>
> or expanded:
>
>   let ends_sq_sq_ang =
>     (([^']']*']') ([^']'] ([^']']*']'))* ']'+)
>     ([^'>'] (([^']']*']') ([^']'] ([^']']*']'))* ']'+))* '>'
>
>   rule token = parse
>     [...]
>     | "<![CDATA[" (ends_sq_sq_ang as data)
>     [...]
>
> Here ends_sq_sq_ang is supposed to match strings ending in ]]> which may
> contain ] and >. If I give it an input like "foo]]]>bar]]>" (note the
> extra square bracket after foo), ocamllex matches the whole input instead
> of just "foo]]]>" as I would expect. But Micmatch, when given the same
> regexp, does the right thing. (The ']'+ bits are supposed to handle the
> "]]]>" case.)
>
> I have probably done something stupid and am embarrassing myself by
> advertising it to the list, but I did check it carefully. Any idea why
> this doesn't work? Thanks,

It's interesting. Note that both matches are correct: ocamllex applies the
longest-match rule by default, and the whole input "foo]]]>bar]]>" also ends
in "]]>", so it too is a valid match for ends_sq_sq_ang. Using "shortest"
instead of "parse" returns the shorter match for this particular example;
that may solve your problem. In general, I find it hard to predict which
match should pop up first when complex backtracking is involved, regardless
of any theoretical reasons.

My advice would be to use PCRE (from micmatch) for line-oriented parsing,
taking advantage of lazy quantifiers and assertions, or ocamllex when
end-of-lines are insignificant and things are nicely nested. If it's not so
simple, try to make several passes, possibly starting by discovering blocks
based on indentation and then parsing each block afterwards with another
technique.

When, in addition, you have to extract the most out of your data even when
some syntax errors are present, it gets hard. When you must tolerate those
errors in exactly the same way as an existing dominant implementation (such
as MediaWiki), it tends to become impossible.

Martin

--
http://wink.com/profile/mjambon
http://martin.jambon.free.fr
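
A minimal sketch of the "shortest" suggestion above, as it might look in a
.mll file. The entry-point name "cdata" and the payload handling are
assumptions for illustration, not from the thread; the regexp definitions
are copied from Jake's post.

  (* cdata.mll -- sketch only; "shortest" makes the generated lexer prefer
     the shortest prefix that matches, so on "<![CDATA[foo]]]>bar]]>" the
     capture stops at the first "]]>" instead of consuming the whole input *)

  let ends_sq = [^']']* ']'
  let ends_sq_sq = ends_sq ([^']'] ends_sq)* ']'+
  let ends_sq_sq_ang = ends_sq_sq ([^'>'] ends_sq_sq)* '>'

  rule cdata = shortest
    | "<![CDATA[" (ends_sq_sq_ang as data)
        (* data still carries the closing "]]>"; drop it to get the payload,
           e.g. "foo]" for the input above *)
        { String.sub data 0 (String.length data - 3) }

If the input does not start with a CDATA section, the generated lexer falls
through with no matching rule and raises the usual lexing failure, so a real
lexer would keep its other rules under a separate "parse" entry point.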
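
For the PCRE route, here is a sketch using the ocaml-pcre library directly
(which micmatch-pcre builds on) rather than micmatch's match syntax; the
function name cdata_payload and the pattern are assumptions for
illustration. The lazy quantifier ".*?" stops at the first "]]>".

  (* Lazy-quantifier version: group 1 captures everything between
     "<![CDATA[" and the first "]]>"; the inline "(?s)" lets '.' cross
     newlines, since CDATA sections may span several lines. *)
  let cdata_payload (s : string) : string option =
    try
      let groups = Pcre.extract ~pat:"(?s)<!\\[CDATA\\[(.*?)\\]\\]>" s in
      Some groups.(1)
    with Not_found -> None

On Jake's example, cdata_payload "<![CDATA[foo]]]>bar]]>" evaluates to
Some "foo]", which is the expected payload.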
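
The multi-pass idea could start with something as simple as the sketch
below: a first pass that groups input lines into indentation-based blocks,
after which each block is handed to whatever block-level parser fits
(ocamllex, PCRE, ...). Everything here is an illustrative assumption, not
part of the original advice's code.

  (* First pass: a non-indented line starts a new block, indented lines
     attach to the current block. *)
  let split_blocks (lines : string list) : string list list =
    let is_indented line =
      String.length line > 0 && (line.[0] = ' ' || line.[0] = '\t')
    in
    let flush acc block =
      match block with [] -> acc | _ -> List.rev block :: acc
    in
    let rec go acc block = function
      | [] -> List.rev (flush acc block)
      | line :: rest ->
          if is_indented line || block = []
          then go acc (line :: block) rest       (* continue current block *)
          else go (flush acc block) [line] rest  (* start a new block *)
    in
    go [] [] lines

  (* Second pass: parse each block independently; parse_block is a
     placeholder for whatever technique suits that block. *)
  let parse_all parse_block lines =
    List.map parse_block (split_blocks lines)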