bug #13235: Tokenizer: Add support for regex macros

Submitted by:  Francis Norton <roundand>
Submitted on:  Tue 31 May 2005 10:06:48 AM UTC  
 
Severity: 5 - Major
Item Group: Future Improvement
Status: Postponed
Assigned to: Per Cederberg <cederberg>
Open/Closed: Closed

Sun 05 Apr 2009 04:37:55 PM UTC, comment #3:

This feature request has been merged into the simplified grammar format feature on the Wiki:

http://code.google.com/p/grammatica/wiki/FeatureSimplifiedGrammarFormat

Closing this issue here, as further discussion is on the Wiki.

Per Cederberg <cederberg>
Project Administrator, in charge of this item.
Tue 31 May 2005 09:09:32 PM UTC, comment #2:

I would definitely prefer the %macros% option -

[a] it would be compatible with some interesting, existing flex-style grammars, e.g. in W3C specifications

[b] I'm guessing that it would be more efficient to resolve these patterns as tokens than as productions

[c] as you say, it would help keep the productions section as simple as possible.

I do like the idea of modular grammars, but even so I would rather keep things that are uninteresting as productions out of any productions section.

Maybe it's time for me to brush the dust off my ancient Java skills...

Francis Norton <roundand>
Tue 31 May 2005 07:10:48 PM UTC, comment #1:

This is an interesting suggestion. Probably not very tricky to implement either. The main reason for it not being there already is that it proved mostly useless for the grammars that I've created myself.

I guess one way to do this would be to add a new %macros% section, where one simply defines regex macros much like token regexes are defined:

%macros%
UNICODE_ESCAPE = <<\\[0-9a-fA-F]{1,4}>>
...

Then the macros could be used inside regex tokens:

%tokens%
STRING = <<\"{UNICODE_ESCAPE}...>>

Now, it could also be argued that it would be better to represent complex tokens such as these by using productions instead. By letting UNICODE_ESCAPE become a token and String a production, the grammar becomes much clearer and more readable. It may be, however, that such a change increases the complexity of other productions so much that it defeats the purpose.

Possibly the best solution to this issue is to introduce support for modular grammars instead of macros. That way, complex tokens could easily be expressed as productions in a separate grammar file. See bug #3599 and bug #3604 for more about the implications of this.

Per Cederberg <cederberg>
Project Administrator, in charge of this item.
Tue 31 May 2005 10:06:48 AM UTC, original submission:

Many W3C and other specs use regex macros to clarify their tokenizer rules; e.g. at http://www.w3.org/TR/CSS1#appendix-b we have:

---
unicode \\[0-9a-f]{1,4}
latin1 [¡-ÿ]
escape {unicode}|\\[ -~¡-ÿ]
stringchar {escape}|{latin1}|[ !#$%&(-~]
nmstrt [a-z]|{latin1}|{escape}
nmchar [-a-z0-9]|{latin1}|{escape}
ident {nmstrt}{nmchar}*
name {nmchar}+
d [0-9]
notnm [^-a-z0-9\\]|{latin1}
w [ \t\n]*
num {d}+|{d}*\.{d}+
string \"({stringchar}|\')\"|\'({stringchar}|\")\'
---

This obviously makes the semantics clearer, and it also makes the regexes easier to maintain and less error-prone than they would be once all the macros have been expanded by hand.
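
To make the maintenance point concrete, this is roughly what just two of these macros look like after {d}, {unicode} and {latin1} have been substituted by hand (the Java escaping and class name are mine, purely for illustration):

import java.util.regex.Pattern;

public class Css1Tokens {

    // num = {d}+|{d}*\.{d}+          with  d = [0-9]
    static final Pattern NUM =
        Pattern.compile("[0-9]+|[0-9]*\\.[0-9]+");

    // escape = {unicode}|\\[ -~¡-ÿ]  with  unicode = \\[0-9a-f]{1,4}
    static final Pattern ESCAPE =
        Pattern.compile("\\\\[0-9a-f]{1,4}|\\\\[ -~¡-ÿ]");
}

Even these short ones are already awkward to review; stringchar and string expand into considerably longer expressions, which is exactly what the macros avoid.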

Francis Norton <roundand>

 

No files currently attached

 

Depends on the following items: None found

Items that depend on this one: None found

 


    Follow 5 latest changes.

    Date                             Changed By  Updated Field    Previous Value => Replaced By
    Sun 05 Apr 2009 04:37:55 PM UTC  cederberg   Status           None => Postponed
                                                 Assigned to      None => cederberg
                                                 Open/Closed      Open => Closed
                                                 Discussion Lock  Unlocked => Locked
    Tue 31 May 2005 07:10:48 PM UTC  cederberg   Summary          request for macros in the tokenizer => Tokenizer: Add support for regex macros
