
[/==============================================================================
    Copyright (C) 2001-2011 Joel de Guzman
    Copyright (C) 2001-2011 Hartmut Kaiser

    Distributed under the Boost Software License, Version 1.0. (See accompanying
    file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
===============================================================================/]

[section:lexer_tokenizing Tokenizing Input Data]

[heading The tokenize function]

The `tokenize()` function is a helper function simplifying the usage of a lexer
in a stand alone fashion. For instance, you may have a stand alone lexer where all
the required functionality is implemented inside lexer semantic actions. A good
example for this is the [@../../example/lex/word_count_lexer.cpp word_count_lexer]
described in more detail in the section __sec_lex_quickstart_2__.

[wcl_token_definition]

Tokenizing the given input while discarding all generated tokens is a common
application of the lexer. For this reason __lex__ exposes an API function
`tokenize()` minimizing the code required:

    // Read input from the given file
    std::string str (read_from_file(1 == argc ? "word_count.input" : argv[1]));

    word_count_tokens<lexer_type> word_count_lexer;
    std::string::iterator first = str.begin();

    // Tokenize all the input, while discarding all generated tokens
    bool r = tokenize(first, str.end(), word_count_lexer);

This code is completely equivalent to the more verbose version shown in the
section __sec_lex_quickstart_2__. The function `tokenize()` returns either when
the end of the input has been reached (in this case the return value will be
`true`), or when the lexer could not match any of the token definitions in the
input (in this case the return value will be `false` and the iterator `first`
will point to the first unmatched character in the input sequence).
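
As a minimal sketch of how the return value and the updated iterator can be
used for error reporting (the diagnostic output below is illustrative, not
part of the library):

    std::string::iterator first = str.begin();
    bool r = tokenize(first, str.end(), word_count_lexer);

    if (!r) {
        // Tokenization failed: 'first' now points to the first character
        // the lexer could not match.
        std::string rest(first, str.end());
        std::cerr << "Lexical analysis failed at: \"" << rest << "\"\n";
    }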

The prototype of this function is:

    template <typename Iterator, typename Lexer>
    bool tokenize(Iterator& first, Iterator last, Lexer const& lex
      , typename Lexer::char_type const* initial_state = 0);

[variablelist where:
    [[Iterator& first]      [The beginning of the input sequence to tokenize. The
                             value of this iterator will be updated by the
                             lexer, pointing to the first not matched
                             character of the input after the function
                             returns.]]
    [[Iterator last]        [The end of the input sequence to tokenize.]]
    [[Lexer const& lex]     [The lexer instance to use for tokenization.]]
    [[Lexer::char_type const* initial_state]
                            [This optional parameter can be used to specify
                             the initial lexer state for tokenization.]]
]
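
For instance, assuming a lexer `my_lexer` that associates some of its token
definitions with a lexer state named `"COMMENT"` (both names are made-up
examples, not something defined by the library), tokenization can be started
in that state by passing the state name:

    // Start tokenizing in the lexer state "COMMENT" instead of the
    // default state.
    std::string::iterator first = str.begin();
    bool r = tokenize(first, str.end(), my_lexer, "COMMENT");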

A second overload of the `tokenize()` function allows an arbitrary function or
function object to be specified, which is called for each of the generated
tokens. For some applications this is very useful, as it might avoid having
lexer semantic actions. For an example of how to use this function, please
have a look at [@../../example/lex/word_count_functor.cpp word_count_functor.cpp]:

[wcf_main]

Here is the prototype of this `tokenize()` function overload:

    template <typename Iterator, typename Lexer, typename F>
    bool tokenize(Iterator& first, Iterator last, Lexer const& lex, F f
      , typename Lexer::char_type const* initial_state = 0);

[variablelist where:
    [[Iterator& first]      [The beginning of the input sequence to tokenize. The
                             value of this iterator will be updated by the
                             lexer, pointing to the first not matched
                             character of the input after the function
                             returns.]]
    [[Iterator last]        [The end of the input sequence to tokenize.]]
    [[Lexer const& lex]     [The lexer instance to use for tokenization.]]
    [[F f]                  [A function or function object to be called for
                             each matched token. This function is expected to
                             have the prototype: `bool f(Lexer::token_type);`.
                             The `tokenize()` function will return immediately if
                             `f` returns `false`.]]
    [[Lexer::char_type const* initial_state]
                            [This optional parameter can be used to specify
                             the initial lexer state for tokenization.]]
]
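
As an illustration of this overload, here is a minimal sketch counting all
tokens generated from the input (the `token_counter` function object is a
made-up example, not part of the library):

    // Function object invoked for every matched token. Returning true tells
    // tokenize() to continue, returning false would stop tokenization.
    struct token_counter
    {
        explicit token_counter(std::size_t& count) : count_(count) {}

        template <typename Token>
        bool operator()(Token const&) const
        {
            ++count_;       // count every generated token
            return true;    // continue until the end of the input
        }

        std::size_t& count_;
    };

    std::size_t count = 0;
    std::string::iterator first = str.begin();
    bool r = tokenize(first, str.end(), word_count_lexer, token_counter(count));

Note that `tokenize()` takes the function object by value, which is why the
sketch stores a reference to an external counter instead of a member counter.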

[/heading The generate_static_dfa function]

[endsect]