From: Robert Fischer <robert@f...>
Subject: Re: [Caml-list] Re: Why OCaml sucks
> The problem, as I understand it, is in writing parsers.  Your standard
> finite-automaton-based regular expression library or lexical analyzer is
> based, at its heart, on a table lookup: you have a 2D array whose size
> is the number of input characters times the number of states.  For ASCII
> input, the number of possible input characters is small: 256 at most.
> 256 input characters times hundreds of states isn't that big of a table;
> we're looking at sizes in the tens of kilobytes, easily handleable even
> in the bad old days of 64K segments.  Even going to UTF-16 ups the number
> of input characters from 256 to 65,536, and now a moderately large state
> machine (hundreds of states) weighs in at tens of megabytes of table
> space.  And, of course, if you try to handle the entire 31-bit Unicode
> code point space, welcome to really large tables :-).
> 
> The solution, I think, is to change the implementation of your finite
> automaton to use some data structure smarter than a flat 2D array, but
> that's just me.
> 
Yes.  It is certainly possible to write slow code to solve this problem.
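
For what it's worth, the "smarter data structure" doesn't have to be exotic.  Here is a rough,
hypothetical sketch in OCaml of a transition table stored as character ranges per state.  None of it
comes from ocamllex or any real library, and all the names (state, range, dfa, step, run) are made up
purely for illustration:

(* Rough sketch only: a DFA whose transitions are stored per state as a
   short list of character ranges instead of a dense row with one column
   per code point. *)

type state = int

(* Any code point in [lo, hi] moves the machine to [target]. *)
type range = { lo : int; hi : int; target : state }

type dfa = {
  start  : state;
  accept : state -> bool;
  trans  : range list array;   (* trans.(s) = ranges leaving state s *)
}

(* One step: scan the ranges for the current state.  Memory cost is
   proportional to the number of distinct ranges, not to 0x10FFFF
   columns per state. *)
let step (d : dfa) (s : state) (cp : int) : state option =
  let rec find = function
    | [] -> None
    | r :: rest -> if cp >= r.lo && cp <= r.hi then Some r.target else find rest
  in
  find d.trans.(s)

(* Run the machine over a list of code points. *)
let run (d : dfa) (cps : int list) : bool =
  let rec go s = function
    | [] -> d.accept s
    | cp :: rest ->
        (match step d s cp with None -> false | Some s' -> go s' rest)
  in
  go d.start cps

(* Tiny example: one or more ASCII letters; everything else rejects. *)
let ascii_letters : dfa =
  let letter t = [ { lo = 0x41; hi = 0x5A; target = t };
                   { lo = 0x61; hi = 0x7A; target = t } ] in
  { start = 0; accept = (fun s -> s = 1); trans = [| letter 1; letter 1 |] }

A sorted array of ranges with binary search would scale better than the linear scan, and driving the
automaton over raw UTF-8 bytes (so the alphabet stays at 256) is another obvious way to keep the
tables small -- which brings us to the encoding itself.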

A slightly more involved analysis is probably in order, so let's ask Wikipedia to shed some more light.

http://en.wikipedia.org/wiki/UTF-8#Rationale_behind_UTF-8.27s_design

As Kuba pointed out, the high bit is 0 on any ASCII character, and the leading bits of the first byte
of a multi-byte sequence determine the length of that sequence.  There are also a few large classes of
bit sequences which are simply not allowed.  Now, I'm not an expert in writing parsers, but these
qualities certainly sound like nice optimization hooks.
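
To make that concrete, here is a small, self-contained OCaml function that reads the sequence length
straight off the lead byte.  The name utf8_sequence_length is made up for this example, and the
rejection of 0xC0/0xC1 and of bytes above 0xF4 follows the current RFC 3629 restriction to U+10FFFF
rather than the old 31-bit space:

(* Hypothetical helper, not from any library: classify a UTF-8 lead byte
   by its leading bits.  Returns the total sequence length it announces,
   or None for bytes that can never start a sequence. *)
let utf8_sequence_length (b : char) : int option =
  let b = Char.code b in
  if b < 0x80 then Some 1                   (* 0xxxxxxx: plain ASCII *)
  else if b land 0xE0 = 0xC0 then
    (if b < 0xC2 then None else Some 2)     (* 110xxxxx; 0xC0/0xC1 would be overlong *)
  else if b land 0xF0 = 0xE0 then Some 3    (* 1110xxxx *)
  else if b land 0xF8 = 0xF0 then
    (if b > 0xF4 then None else Some 4)     (* 11110xxx; above 0xF4 is past U+10FFFF *)
  else None                                 (* 10xxxxxx continuation, or invalid *)

let () =
  List.iter
    (fun b ->
       match utf8_sequence_length b with
       | Some n -> Printf.printf "0x%02X starts a %d-byte sequence\n" (Char.code b) n
       | None   -> Printf.printf "0x%02X cannot start a sequence\n" (Char.code b))
    [ 'A'; '\xC3'; '\xE2'; '\xF0'; '\x80'; '\xC0' ]

A scanner that branches on those leading bits touches each byte once and never needs a 65,536-wide
table row, which is exactly the kind of hook I mean.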

Getting back to the original question, though -- is there any evidence that Java/C# are slow because
of Unicode support, and not because of other aspects of those languages?  Because that assertion seems
flat-out bogus to me.

~~ Robert.