NAME
    leex - Lexical analyzer generator for Erlang

DESCRIPTION
    A regular expression based lexical analyzer generator for Erlang, similar to lex or flex.

    Note:
The Leex module should be considered experimental as it will be subject to
changes in future releases.
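The workflow can be sketched end to end. The following is a minimal, illustrative scanner definition; the file name number.xrl and the token shapes are invented for this example, not prescribed by Leex:

```erlang
%% number.xrl -- hypothetical minimal Leex input file

Definitions.

D = [0-9]

Rules.

{D}+      : {token,{integer,TokenLine,list_to_integer(TokenChars)}}.
[\s\t\n]+ : skip_token.

Erlang code.

%% This section may be empty; extra exports or helper functions go here.
```

Running leex:file("number.xrl") and compiling the generated number.erl should produce a module whose string/1 function scans, for example, "12 34" into {ok,[{integer,1,12},{integer,1,34}],1}.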
DATA TYPES
    error_info() =
        {erl_anno:line() | none, module(), ErrorDescriptor :: term()}

    The standard error_info() structure that is returned from all I/O modules. ErrorDescriptor is formattable by format_error/1.

EXPORTS
    file(FileName) -> leex_ret()
    file(FileName, Options) -> leex_ret()

    Types:

        FileName = file:filename()
        Options = Option | [Option]
        Option = {dfa_graph, boolean()}
               | {includefile, Includefile :: file:filename()}
               | {report_errors, boolean()}
               | {report_warnings, boolean()}
               | {report, boolean()}
               | {return_errors, boolean()}
               | {return_warnings, boolean()}
               | {return, boolean()}
               | {scannerfile, Scannerfile :: file:filename()}
               | {verbose, boolean()}
               | {warnings_as_errors, boolean()}
               | dfa_graph | report_errors | report_warnings | report
               | return_errors | return_warnings | return | verbose
               | warnings_as_errors
        leex_ret() = ok_ret() | error_ret()
        ok_ret() = {ok, Scannerfile :: file:filename()}
                 | {ok, Scannerfile :: file:filename(), warnings()}
        error_ret() = error
                    | {error, Errors :: errors(), Warnings :: warnings()}
        errors() = [{file:filename(), [error_info()]}]
        warnings() = [{file:filename(), [error_info()]}]

    Generates a lexical analyzer from the definition in the input file. The input file has the extension .xrl. This is added to the filename if it is not given. The resulting module is the Xrl filename without the .xrl extension. The current options are:

        dfa_graph
            Generates a .dot file that contains a description of the DFA in a format that can be viewed with Graphviz.

        {includefile, Includefile}
            Uses a specific or customised prologue file instead of the default.

        {report_errors, boolean()}
            Causes errors to be printed as they occur. Default is true.

        {report_warnings, boolean()}
            Causes warnings to be printed as they occur. Default is true.

        report
            A short form for both report_errors and report_warnings.

        {return_errors, boolean()}
            If this flag is set, {error, Errors, Warnings} is returned when there are errors. Default is false.

        {return_warnings, boolean()}
            If this flag is set, an extra field containing Warnings is added to the tuple returned upon success. Default is false.

        return
            A short form for both return_errors and return_warnings.

        {scannerfile, Scannerfile}
            Scannerfile is the name of the file that will contain the generated Erlang scanner code. The default is to add the extension .erl to FileName stripped of the .xrl extension.

        {verbose, boolean()}
            Outputs information from parsing the input file and generating the internal tables.

        {warnings_as_errors, boolean()}
            Causes warnings to be treated as errors.
    Any of the Boolean options can be set to true by stating the name of the option. For example, verbose is equivalent to {verbose, true}.

    Leex will add the extension .hrl to the Includefile name and the extension .erl to the Scannerfile name, unless the extension is already there.

    format_error(ErrorDescriptor) -> io_lib:chars()

    Types:

        ErrorDescriptor = term()
    Returns a descriptive string in English of an error reason ErrorDescriptor returned by leex:file/1,2 when there is an error in a regular expression.

GENERATED SCANNER EXPORTS
    The following functions are exported by the generated scanner.

EXPORTS
    Module:string(String) -> StringRet
    Module:string(String, StartLine) -> StringRet

    Types:

        String = string()
        StringRet = {ok, Tokens, EndLine} | ErrorInfo
        Tokens = [Token]
        EndLine = StartLine = erl_anno:line()

    Scans String and returns all the tokens in it, or an error.

    Note:
It is an error if not all of the characters in String are consumed.
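For illustration, given a scanner generated from a rule such as {D}+ : {token,{integer,TokenLine,list_to_integer(TokenChars)}}. together with a whitespace-skipping rule (the module name number is invented for this sketch), a shell session might look like:

```erlang
1> number:string("12 34").
{ok,[{integer,1,12},{integer,1,34}],1}
2> number:string("12 34", 7).      % start the line numbering at 7
{ok,[{integer,7,12},{integer,7,34}],7}
```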
    Module:token(Cont, Chars) -> {more, Cont1} | {done, TokenRet, RestChars}

    Types:

        Cont = [] | Cont1
        Cont1 = tuple()
        Chars = RestChars = string() | eof
        TokenRet = {ok, Token, EndLine} | {eof, EndLine} | ErrorInfo
        StartLine = EndLine = erl_anno:line()

    This is a re-entrant call to try and scan one token from Chars. If there are enough characters in Chars to either scan a token or detect an error, this is returned as {done,...}. Otherwise {more,Cont1} is returned, where Cont1 is used in the next call to token() together with more characters to try to scan the token. This continues until a token has been scanned. Cont is initially [].

    The function is not designed to be called directly by an application, but is used through the I/O system, where it can typically be invoked from an application by:

        io:request(InFile, {get_until,unicode,Prompt,Module,token,[Line]})
            -> TokenRet

    Module:tokens(Cont, Chars) -> {more, Cont1} | {done, TokensRet, RestChars}

    Types:

        Cont = [] | Cont1
        Cont1 = tuple()
        Chars = RestChars = string() | eof
        TokensRet = {ok, Tokens, EndLine} | {eof, EndLine} | ErrorInfo
        Tokens = [Token]
        StartLine = EndLine = erl_anno:line()

    This is a re-entrant call to try and scan tokens from Chars. If there are enough characters in Chars to either scan tokens or detect an error, this is returned as {done,...}. Otherwise {more,Cont1} is returned, where Cont1 is used in the next call to tokens() together with more characters to try to scan the tokens. This continues until all tokens have been scanned. Cont is initially [].

    This function differs from token in that it continues to scan tokens up to and including when an {end_token,Token} has been scanned (see the next section). It then returns all the tokens. This is typically used for scanning grammars like Erlang, where there is an explicit end token, '.'. If no end token is found, the whole file is scanned and returned. If an error occurs, all tokens up to and including the next end token are skipped.

    The function is not designed to be called directly by an application, but is used through the I/O system, where it can typically be invoked from an application by:

        io:request(InFile, {get_until,unicode,Prompt,Module,tokens,[Line]})
            -> TokensRet

DEFAULT LEEX OPTIONS
    The (host operating system) environment variable ERL_COMPILER_OPTIONS can be used to give default Leex options. Its value must be a valid Erlang term. If the value is a list, it is used as is. If it is not a list, it is put into a list.

    The list is appended to any options given to file/2. The list can be retrieved with compile:env_compiler_options/0.

INPUT FILE FORMAT
    Erlang style comments starting with a % are allowed in scanner files. A definition file has the following format:

        <Header>

        Definitions.
        <Macro Definitions>

        Rules.
        <Token Rules>

        Erlang code.
        <Erlang code>

    The "Definitions.", "Rules." and "Erlang code." headings are mandatory and must occur at the beginning of a source line.
    The <Header>, <Macro Definitions> and <Erlang code> sections may be empty, but there must be at least one rule.

    Macro definitions have the following format:

        NAME = VALUE

    and there must be spaces around =. Macros can be used in the regular expressions of rules by writing {NAME}.

    Note:
When macros are expanded in expressions the macro calls are replaced by the
macro value without any form of quoting or enclosing in parentheses.
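Because expansion is plain textual substitution, it is usually safest to write any parentheses the expression needs into the macro value itself. A sketch of the pitfall (the macro names are invented for illustration):

```erlang
Definitions.

%% Unparenthesized value: {SIGN0}?{D}+ would expand to the text
%% +|-?[0-9]+, which is an alternation, not "an optional sign then digits".
SIGN0 = +|-

%% Parenthesized in the value, so {SIGN}?{D}+ expands to (\+|-)?[0-9]+.
SIGN  = (\+|-)
D     = [0-9]

Rules.

{SIGN}?{D}+ : {token,{signed_integer,TokenLine,TokenChars}}.
```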
    Rules have the following format:

        <Regexp> : <Erlang code>.

    The <Regexp> must occur at the start of a line and not include any blanks; use \t and \s to include TAB and SPACE characters in the regular expression. If <Regexp> matches, the corresponding <Erlang code> is evaluated to generate a token. Within the Erlang code the following predefined variables are available:

        TokenChars
            A list of the characters in the matched token.

        TokenLen
            The number of characters in the matched token.

        TokenLine
            The line number where the token occurred.
    The code must return:

        {token,Token}
            Return Token to the caller.

        {end_token,Token}
            Return Token and is the last token in a tokens call.

        skip_token
            Skip this token completely.

        {error,ErrString}
            An error in the token; ErrString is a string describing the error.
    It is also possible to push back characters into the input characters with the following returns:

        {token,Token,PushBackList}
        {end_token,Token,PushBackList}
        {skip_token,PushBackList}
    These have the same meanings as the normal returns, but the characters in PushBackList will be prepended to the input characters and scanned for the next token. Note that pushing back a newline means that the line numbering will no longer be correct.

    Note:
Pushing back characters makes it easy to accidentally cause the scanner to
loop: a rule that pushes back as many characters as it consumes never makes
progress.
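As an illustration of push-back (the rule set and token names are invented), a rule can consume one character of lookahead to classify a token and then return that character to the input:

```erlang
Rules.

%% A lowercase name directly followed by "(" is a function name:
%% emit the name only and push the "(" back for the next rule.
[a-z]+\( : {token,{func_name,TokenLine,lists:droplast(TokenChars)},"("}.

%% A plain lowercase name.
[a-z]+   : {token,{atom,TokenLine,list_to_atom(TokenChars)}}.

%% The pushed-back "(" is then scanned by this rule.
\(       : {token,{'(',TokenLine}}.
```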
    The following example matches a simple Erlang integer or float and returns a token that can be sent to the Erlang parser:

        D = [0-9]

        {D}+ :
            {token,{integer,TokenLine,list_to_integer(TokenChars)}}.
        {D}+\.{D}+((E|e)(\+|\-)?{D}+)? :
            {token,{float,TokenLine,list_to_float(TokenChars)}}.

    The Erlang code in the "Erlang code." section is written into the output file directly after the module declaration and the predefined exports declaration, so it is possible to add extra exports, define imports, and add other attributes, which are then visible in the whole file.

REGULAR EXPRESSIONS
    The regular expressions allowed here are a subset of the set found in egrep and in the AWK programming language, as defined in the book The AWK Programming Language, by A. V. Aho, B. W. Kernighan, and P. J. Weinberger. They are composed of the following characters:

        c
            Matches the non-metacharacter c.

        \c
            Matches the escape sequence or literal character c.

        .
            Matches any character.

        ^
            Matches the beginning of a string.

        $
            Matches the end of a string.

        [abc...]
            Character class, which matches any of the characters abc... Character ranges are specified by a pair of characters separated by a -.

        [^abc...]
            Negated character class, which matches any character except abc...

        r1 | r2
            Alternation. It matches either r1 or r2.

        r1r2
            Concatenation. It matches r1 and then r2.

        r+
            Matches one or more rs.

        r*
            Matches zero or more rs.

        r?
            Matches zero or one rs.

        (r)
            Grouping. It matches r.
    The escape sequences allowed are the same as for Erlang strings:

        \b    Backspace.
        \f    Form feed.
        \n    Newline (line feed).
        \r    Carriage return.
        \t    Tab.
        \e    Escape.
        \v    Vertical tab.
        \s    Space.
        \d    Delete.
        \ddd  The octal value ddd.
        \xhh  The hexadecimal value hh.
        \x{h...}
              The hexadecimal value h....
        \c    Any other character literally, for example, \\ for backslash and \" for ".
    The following examples define simplified versions of a few Erlang data types:

        Atoms      [a-z][0-9a-zA-Z_]*
        Variables  [A-Z_][0-9a-zA-Z_]*
        Floats     (\+|-)?[0-9]+\.[0-9]+((E|e)(\+|-)?[0-9]+)?

    Note:
Anchoring a regular expression with ^ and $ is not implemented in
the current version of Leex and just generates a parse error.