Sun 22 May 2005 09:32:46 AM UTC, comment #1:
Hmm... This issue seems more convoluted than I first anticipated. It turns out that modern REs aren't regular expressions in the formal language sense anymore. They are actually quite a bit more expressive, and impossible to handle correctly with a simple DFA model or with the formally equivalent NFA.
So the solution a lot of people have adopted is to use a DFA implementation for the simple REs where it works, but a backtracking pseudo-NFA for the complex ones. Or a hybrid that uses DFAs for the parts of the RE where they work, but falls back to backtracking elsewhere. These solutions have exponential worst-case performance (illustrated by the sketch below), but with some tricky optimization one might still get a fairly speedy implementation for most normal use.
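To make the exponential worst case concrete, here is a small stand-alone timing sketch (not Grammatica code) against java.util.regex, which is itself a backtracking engine. The nested-quantifier pattern (a+)+b applied to a string of only 'a' characters forces the matcher to try every way of splitting the run between the inner and outer '+' before it can fail on the missing 'b':

    import java.util.regex.Pattern;

    public class BacktrackingBlowup {
        public static void main(String[] args) {
            // Nested quantifiers: every extra 'a' roughly doubles the time
            // the backtracking engine needs to conclude there is no match.
            Pattern p = Pattern.compile("(a+)+b");
            for (int n = 18; n <= 26; n += 2) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < n; i++) {
                    sb.append('a');
                }
                long start = System.nanoTime();
                boolean matched = p.matcher(sb).matches();
                long millis = (System.nanoTime() - start) / 1000000;
                System.out.println("n=" + n + " matched=" + matched
                        + " time=" + millis + " ms");
            }
        }
    }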
Also, a choice must be made regarding the handling of alternatives. Strict POSIX compliance requires evaluating all alternatives and choosing the longest match, whereas Perl and many other languages pick the alternatives in order. Grammatica currently uses a hybrid approach, due to an oversight of some of the POSIX complexities. I suspect the traditional Perl way is best for Grammatica, but care must be taken to properly document the RE implementation this time around. The small example below shows the difference.
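A tiny illustration of the semantic difference, again just a sketch using java.util.regex (which follows the Perl-style ordering): the pattern a|ab applied to the input "ab" matches only "a" under leftmost-first semantics, whereas POSIX leftmost-longest semantics would report "ab".

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class AlternationOrder {
        public static void main(String[] args) {
            // Perl-style ordering: the leftmost alternative that matches wins,
            // so "a|ab" applied to "ab" matches just "a".
            Matcher m = Pattern.compile("a|ab").matcher("ab");
            if (m.lookingAt()) {
                System.out.println("matched \"" + m.group() + "\"");  // prints: matched "a"
            }
            // POSIX leftmost-longest semantics would instead require considering
            // both alternatives and reporting the longer match "ab".
        }
    }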
I'm not yet certain what the complete solution for Grammatica should be. It seems a quite substantial rewrite of the current backtracking interpreter is needed, as it may experience stack overflows on large matches (it uses the call stack for backtracking). Either the heap should be used for storing the backtracking information, or the various branches should be executed in parallel in some kind of priority queue; a rough sketch of the first option follows.
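The sketch below is not the actual Grammatica matcher and the pattern syntax is deliberately stripped down to literals, '.' and 'x*'. The point is only to show the backtracking state kept as an explicit, heap-allocated stack of (pattern position, text position) pairs instead of on the call stack, so a long match cannot overflow the JVM stack:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class HeapBacktrackMatcher {

        // Matches the whole text against a minimal pattern syntax: literal
        // characters, '.' (any character) and 'x*' (zero or more x). All
        // choice points are stored on the heap in an explicit stack.
        public static boolean matches(String pattern, String text) {
            Deque<int[]> pending = new ArrayDeque<int[]>();  // {patternPos, textPos}
            pending.push(new int[] { 0, 0 });
            while (!pending.isEmpty()) {
                int[] state = pending.pop();
                int p = state[0];
                int t = state[1];
                if (p == pattern.length()) {
                    if (t == text.length()) {
                        return true;                          // full match found
                    }
                    continue;                                 // dead end, try next state
                }
                boolean starred = p + 1 < pattern.length() && pattern.charAt(p + 1) == '*';
                boolean headMatches = t < text.length()
                        && (pattern.charAt(p) == '.' || pattern.charAt(p) == text.charAt(t));
                if (starred) {
                    pending.push(new int[] { p + 2, t });     // skip the starred element
                    if (headMatches) {
                        pending.push(new int[] { p, t + 1 }); // consume one char, stay on it
                    }
                } else if (headMatches) {
                    pending.push(new int[] { p + 1, t + 1 });
                }
            }
            return false;
        }

        public static void main(String[] args) {
            System.out.println(matches("a*b", "aaab"));       // true
            System.out.println(matches("a*b", "aaac"));       // false (no 'b' in the text)
            System.out.println(matches(".*", "anything"));    // true
        }
    }

Swapping the ArrayDeque for a priority queue ordered by, say, text position would roughly correspond to the "parallel branches" idea, at the cost of having to decide how competing states are ranked.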
Finally, some interesting articles on the subject to get back to when I start to implement this:
http://www.oreilly.com/catalog/regex/chapter/ch04.html
http://dev.perl.org/perl6/doc/design/apo/A05.html