
[/
 / Copyright (c) 2003 Boost.Test contributors
 /
 / Distributed under the Boost Software License, Version 1.0. (See accompanying
 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 /]

[section:test_case_generation Data-driven test cases]

[h4 Why data-driven test cases?]
Some tests need to be repeated for a series of different input parameters. One way to achieve this is
to manually register a test case for each parameter. You can also invoke a test function with
all the parameters manually from within your test case, like this:
``
void single_test( int i )
{
  __BOOST_TEST__( /* test assertion */ );
}

void combined_test()
{
  int params[] = { 1, 2, 3, 4, 5 };
  std::for_each( params, params + 5, &single_test );
}
``
The approach above has several drawbacks:

* the logic for running the tests is inside the test itself: `single_test` in the above example is run from the test
case `combined_test`, while its execution would be better handled by the __UTF__;
* in case of a fatal failure for one of the values in the `params` array above (say a failure in __BOOST_TEST_REQUIRE__),
the whole test `combined_test` is aborted and the next test case in the test tree is executed;
* in case of failure, the reporting is not accurate enough: the test would certainly have to be rerun by a human during
debugging sessions, or additional reporting logic would have to be implemented in the test itself.
[h4 Parameter generation, scalability and composition]

In some circumstances, one would like to run a parametrized test over an /arbitrarily large/ set of values. Enumerating the
parameters by hand is not a solution that scales well, especially when these parameters can be described by another
function that generates them. However, enumerating the values by hand also has the following limitations:
* *Generating functions*: suppose we have a function `func(float f)`, where `f` is any number in [0, 1]. We are not
so much interested in the exact value of `f` as in testing `func` itself. Instead of writing down the values of `f`
against which `func` will be tested, why not choose `f` randomly in [0, 1]? And instead of having only one value
for `f`, why not run the test on arbitrarily many numbers? This small example shows that tests requiring parameters
are more powerful when, instead of writing constant values into the test, a generating function is provided.
* *Scalability*: suppose we have a test case for `func1`, in which we test `N` values written as constants in the test
file. What does the test ensure? We have the guarantee that `func1` works for these `N` values. Yet in this
setting `N` is necessarily finite and usually small. How would we extend or scale `N` easily? One solution is to
be able to generate new values, and to define the test on the *class* of possible inputs of `func1` over
which the function should have a defined behavior. To some extent, the `N` constants written into the test are just
an excerpt of the possible inputs of `func1`, and working on the class of inputs gives more flexibility and power
to the test.
* *Composition*: suppose we already have test cases for two functions `func1` and `func2`, taking as arguments the
types `T1` and `T2` respectively. Now we would like to test a new function `func3` that takes as argument a type
`T3` containing `T1` and `T2`, and that calls `func1` and `func2` through a known algorithm. An example of such a
setting would be:
``
// Returns the log of x
// Precondition: x strictly positive.
double fast_log(double x);

// Returns 1/(x-1)
// Precondition: x != 1
double fast_inv(double x);

struct dummy {
  unsigned int field1;
  unsigned int field2;
};

double func3(dummy value)
{
  return 0.5 * (exp(fast_log(value.field1))/value.field1 + value.field2/fast_inv(value.field2));
}
``
In this example,

* `func3` inherits the preconditions of `fast_log` and `fast_inv`: it is defined on `(0, +infinity)` and on `[-C, +C] - {1}` for `field1` and `field2` respectively (`C`
being an arbitrarily big constant),
* as defined above, `func3` should be close to 1 everywhere on its definition domain,
* we would like to reuse the properties of `fast_log` and `fast_inv` in the compound function `func3` and assert that `func3` is well defined over an arbitrarily large definition domain.

Having parametrized tests on `func3` hardly tells us anything about its possible numerical properties or instabilities close to the point `{field1 = 0, field2 = 1}`.
Indeed, the parametrized test may cover some points around (0,1), but will fail to capture the *asymptotic behavior* of the function close to this point.
[h4 Data-driven tests in the Boost.Test framework]

The facilities provided by the __UTF__ address the issues described above:

* the notion of *datasets* eases the description of the class of inputs for test cases. The datasets also implement several
operations that enable their combination into new, more complex datasets,
* two macros, __BOOST_DATA_TEST_CASE__ and __BOOST_DATA_TEST_CASE_F__, respectively without and with fixture support,
are used for the declaration and registration of a test case over a collection of values (samples); a minimal sketch is given right after this list,
* each test case, associated with a unique value, is executed independently from the others. These tests are guarded in the same
way regular test cases are, which makes the execution of the tests over each sample of a dataset isolated, robust,
repeatable, and eases debugging,
* several dataset generating functions are provided by the __UTF__.
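
The following is a minimal sketch of what such a data-driven test may look like; it is not taken from the __UTF__ example files and assumes a C++11 compiler. The dataset is built here with the `data::make` factory from an initializer list; both the macro and the factory are detailed in the next sections.

``
#define BOOST_TEST_MODULE dataset_sketch
#include <boost/test/included/unit_test.hpp>
#include <boost/test/data/test_case.hpp>
#include <boost/test/data/monomorphic.hpp>

namespace data = boost::unit_test::data;

// One test case is registered and run per sample; a failure for one sample
// does not prevent the remaining samples from being executed.
BOOST_DATA_TEST_CASE(single_test, data::make({ 1, 2, 3, 4, 5 }), i)
{
  BOOST_TEST( i > 0 /* test assertion on the current sample */ );
}
``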
The remainder of this section covers the notions and features provided by the __UTF__ for data-driven test cases, in
particular:

# the notions of [link boost_test.tests_organization.test_cases.test_case_generation.datasets *dataset* and *sample*] are introduced,
# [link boost_test.tests_organization.test_cases.test_case_generation.datasets_auto_registration the declaration and registration]
of data-driven test cases are explained,
# the [link boost_test.tests_organization.test_cases.test_case_generation.operations /operations/] on datasets are detailed,
# and finally the built-in [link boost_test.tests_organization.test_cases.test_case_generation.generators dataset generators]
are introduced.

[/ ################################################################################################################################## ]
[section Datasets]
To define datasets properly, the notion of *sample* should be introduced first. A *sample* is defined as a /polymorphic tuple/.
The size of the tuple is by definition the *arity* of the sample itself.

A *dataset* is a /collection of samples/ that

* is forward iterable,
* can be queried for its `size`, which in turn may be infinite,
* has an arity, which is the arity of the samples it contains.

Hence a dataset implements the notion of a /sequence/.

The descriptive power of the datasets in the __UTF__ comes from

* the [link boost_test.tests_organization.test_cases.test_case_generation.datasets.dataset_interface interface] for creating custom datasets, which is quite simple,
* the [link boost_test.tests_organization.test_cases.test_case_generation.operations operations] they provide for combining different datasets,
* their interface with other types of collections (`stl` containers, `C` arrays),
* the available built-in [link boost_test.tests_organization.test_cases.test_case_generation.generators /dataset generators/].
[tip Only "monomorphic" datasets are supported, which means that all samples within a single dataset have the same type and the same arity
[footnote Polymorphic datasets may be considered in the future. The need for them is mainly driven by the replacement of the
[link boost_test.tests_organization.test_cases.test_organization_templates typed parametrized test cases] by the dataset-like API.].
However, datasets of different sample types may be combined together, for instance with zips and Cartesian products.
]

As we will see in the next sections, datasets representing collections of different types may be combined together (e.g. with /zips/ or /grids/).
These operations result in new datasets, in which the samples are of an augmented type.
[/ ###################################################################### ]
[section Dataset interface]

The interface of a /dataset/ should implement the following functions/fields:

* `iterator begin()`, where /iterator/ is a forward iterator,
* `boost::unit_test::data::size_t size() const`, which indicates the size of the dataset. The returned type is a dedicated
class [classref boost::unit_test::data::size_t size_t] that can indicate an /infinite/ dataset size,
* an enum called `arity` indicating the arity of the samples returned by the dataset.

Once a dataset class `D` is declared, it should be registered to the framework by specializing the template class
``boost::unit_test::data::monomorphic::is_dataset``
such that ``boost::unit_test::data::monomorphic::is_dataset<D>::value`` evaluates to `true`.
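
The skeleton below is a minimal, illustrative sketch of this interface; the class, member names and values are made up for the illustration, and it assumes C++11 plus the headers `<boost/test/data/test_case.hpp>`, `<boost/test/data/monomorphic.hpp>` and `<type_traits>`. The full, runnable Fibonacci example that follows shows a complete implementation.

``
namespace data = boost::unit_test::data;

// A dataset returning the same integer `count` times.
class repeat_dataset {
public:
  using sample = int;       // type of the samples
  enum { arity = 1 };       // arity of each sample

  // forward iteration over the samples
  struct iterator {
    explicit iterator(int value) : value_(value) {}
    int  operator*() const { return value_; }
    void operator++()      {}          // the sample never changes
  private:
    int value_;
  };

  repeat_dataset(int value, data::size_t count) : value_(value), count_(count) {}

  data::size_t size()  const { return count_; }            // may also be infinite
  iterator     begin() const { return iterator(value_); }

private:
  int          value_;
  data::size_t count_;
};

// Registration: the framework accepts `repeat_dataset` as a dataset only if this trait evaluates to true.
namespace boost { namespace unit_test { namespace data { namespace monomorphic {
  template <> struct is_dataset<repeat_dataset> : std::true_type {};
}}}}
``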
The following example implements a custom dataset generating a Fibonacci sequence.

[bt_example dataset_example68..Example of custom dataset..run-fail]

[endsect]

[/ ###################################################################### ]
[section Dataset creation and delayed creation]

Datasets as defined above are constructed as global objects, before the test module even starts its execution. This makes it impossible to access,
from within the dataset generator and during its iteration, elements like `argc` / `argv`, the
[link boost_test.tests_organization.test_tree.master_test_suite master test suite] (and the preprocessed `argc` / `argv`), or any other object
that is instantiated only after the `main` entry point of the test module.

To overcome this, a [*delayed] dataset instantiation interface has been introduced. This effectively wraps the dataset inside another one,
which [*lazily] instantiates the wrapped dataset.
To instantiate a delayed dataset, the [funcref boost::unit_test::data::monomorphic::make_delayed] function should be used in the
__BOOST_DATA_TEST_CASE__ call. The following snippet:

``
BOOST_DATA_TEST_CASE(dataset_test_case,
                     boost::unit_test::data::make_delayed<custom_dataset>(arg1, ... ), ...)
{
}
``

creates a delayed dataset test case with a generator of type `custom_dataset`. The generator is ['lazily] constructed
with `arg1`, `...`.

[tip A detailed example of delayed creation is given in the section about [link boost_test.runtime_config.custom_command_line_arguments custom command line]
arguments.]

[tip See the class [classref boost::unit_test::data::monomorphic::delayed_dataset `monomorphic::delayed_dataset`] for more details on the
wrapping object.]

[endsect]

[endsect] [/ datasets]

[/ ################################################################################################################################## ]
[/ Main code import for this section ]
[import ../snippet/dataset_1/test_file.cpp]

[/ ################################################################################################################################## ]
[section:datasets_auto_registration Declaring and registering test cases with datasets]
In order to declare and register a data-driven test case, the macros __BOOST_DATA_TEST_CASE__ or __BOOST_DATA_TEST_CASE_F__
should be used. The two forms are equivalent, with the difference that `BOOST_DATA_TEST_CASE_F` supports fixtures.
These macros are variadic and can be used in the following forms:

``
__BOOST_DATA_TEST_CASE__(test_case_name, dataset) { /* dataset of arity 1 */ }
BOOST_DATA_TEST_CASE(test_case_name, dataset, var1) { /* dataset of arity 1 */ }
BOOST_DATA_TEST_CASE(test_case_name, dataset, var1, ..., varN) { /* dataset of arity N */ }

__BOOST_DATA_TEST_CASE_F__(fixture, test_case_name, dataset) { /* dataset of arity 1 with fixture */ }
BOOST_DATA_TEST_CASE_F(fixture, test_case_name, dataset, var1) { /* dataset of arity 1 with fixture */ }
BOOST_DATA_TEST_CASE_F(fixture, test_case_name, dataset, var1, ..., varN) { /* dataset of arity N with fixture */ }
``
The first form of the macro is for datasets of arity 1. The value of the sample being executed by the test body is
available through the automatic variable `sample` (`xrange` is, as its name suggests, a range of values):

[snippet_dataset1_1]

The second form is also for datasets of arity 1, but instead of the variable `sample`, the current sample is brought into `var1`:

[snippet_dataset1_2]

The third form is an extension of the previous form to datasets of arity `N`. The sample being a polymorphic tuple, each
of the variables `var1`, ..., `varN` corresponds to the index 1, ..., `N` of the sample:

[snippet_dataset1_3]

The next three forms of declaration, with `BOOST_DATA_TEST_CASE_F`, are equivalent to the previous ones, with the difference being the support of
a fixture that is executed before the test body for each sample. The fixture should follow the expected interface as detailed
[link boost_test.tests_organization.fixtures.models here].

The arity of the dataset and the number of variables should be exactly the same, the first form being a shortcut for the
case of arity 1.

[tip A compile-time check is performed on the coherence of the arity of the dataset and the number of variables `var1`... `varN`.
For compilers *without C++11* support, the maximal supported arity is controlled by the macro
__BOOST_TEST_DATASET_MAX_ARITY__, which can be overridden /prior/ to including the __UTF__ headers.]

[caution The macros __BOOST_DATA_TEST_CASE__ and __BOOST_DATA_TEST_CASE_F__ are available only for compilers with support for *variadic macros*.]
[h4 Samples and the test tree]

It should be emphasized that these macros do not declare a single test case (as __BOOST_AUTO_TEST_CASE__ would) but declare and
register as many test cases as there are samples in the dataset given as argument. Each test case runs on exactly *one*
sample of the dataset.

More precisely, what

``__BOOST_DATA_TEST_CASE__(test_case_name, dataset)``

does is the following:
* it registers a *test suite* named "`test_case_name`",
* it registers as many test cases as there are samples in "`dataset`", each of which is named after the index of the sample
in the dataset, prefixed by `_` and starting at index `0` ("`_0`", "`_1`", ... "`_(N-1)`", where `N` is the size of the dataset).

This makes it easy to:

* identify which sample is failing (say "`test_case_name/_3`"),
* replay the test for one or several samples (or the full dataset) from the command line, using the [link boost_test.runtime_config.test_unit_filtering test filtering features] provided by the __UTF__ (see the example right after this list),
* apply a [link boost_test.tests_organization.decorators.explicit_decorator_declaration decorator] to each individual test case of the
dataset, as the decorator would apply to the enclosing test suite.
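
For instance, and assuming the test module above was built into an executable called `my_test`, running `my_test --run_test=test_case_name/_3` would rerun only the fourth sample of the dataset, while `my_test --run_test=test_case_name` would rerun the whole dataset; the `--run_test` filter is described in the test filtering section linked above.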
Exactly like regular test cases, each test case (associated with a specific sample) is executed in a /monitored manner/:

* the test executions are independent: if an error occurs for one sample, the execution of the remaining samples is not affected,
* in case of error, the [link boost_test.test_output.test_tools_support_for_logging.contexts context], along with the index of the sample
for which the error occurred, is reported in the [link boost_test.test_output log].
This context contains the names and values of the samples for which the test failed, which eases debugging.

[endsect]

[/ ################################################################################################################################## ]
[section:operations Operations on datasets]

As mentioned earlier, one of the major aspects of using the __UTF__ datasets lies in the number of operations provided
for their combination.

For that purpose, three operators are provided:

* joins with `operator+`,
* zips with `operator^`,
* and grids, or Cartesian products, with `operator*`.

[tip All these operators are associative, which enables their combination without parentheses. However, the language's
operator precedence rules still apply.]

[section Joins]
A ['join], denoted `+`, is an operation on two datasets `dsa` and `dsb` of the same arity and compatible types, resulting in the *concatenation* of the two datasets
in the left-to-right order of the symbol `+`:

``
dsa = (a_1, a_2, ... a_i)
dsb = (b_1, b_2, ... b_j)

dsa + dsb = (a_1, a_2, ... a_i, b_1, b_2, ... b_j)
``
The following properties hold:

* the resulting dataset has the same arity as the operand datasets,
* the size of the resulting dataset is the sum of the sizes of the joined datasets,
* the operation is associative, and it is possible to combine more than two datasets in one expression. The following joins are equivalent for any datasets `dsa`, `dsb` and `dsc` (a small usage sketch follows):

``
( dsa + dsb ) + dsc
  == dsa + ( dsb + dsc )
  == dsa + dsb + dsc
``
[warning In the expression `dsa + dsb`, `dsa` and/or `dsb` can be of infinite size. The resulting dataset will have an infinite size as well. If `dsa` is infinite, the content of
`dsb` will never be reached.]

[bt_example dataset_example62..Example of join on datasets..run]

[endsect]

[section Zips]
A ['zip], denoted `^`, is an operation on two datasets `dsa` and `dsb` of the same arity and the same size, resulting in a dataset in which the `k`-th sample of `dsa` is paired with the corresponding `k`-th sample of `dsb`.
The order of the samples in the resulting dataset follows the left-to-right order of the symbol `^`.

``
dsa = (a_1, a_2, ... a_i)
dsb = (b_1, b_2, ... b_i)

dsa ^ dsb = ( (a_1, b_1), (a_2, b_2) ... (a_i, b_i) )
``
The following properties hold:

* the arity of the resulting dataset is the sum of the arities of the operand datasets,
* the size of the resulting dataset is equal to the size of the operand datasets (they are supposed to be of the same size),
except when the operand dataset sizes mismatch (see below),
* the operation is associative, and it is possible to combine more than two datasets in one expression:

``
( dsa ^ dsb ) ^ dsc
  == dsa ^ ( dsb ^ dsc )
  == dsa ^ dsb ^ dsc
``
A particular handling is performed if `dsa` and `dsb` are of different sizes. The rule is as follows (a small sketch is given after the list):

* if both zipped datasets have the same size, this is the size of the resulting dataset (this size may be infinite),
* otherwise, if one of the datasets is of size 1 (a singleton) or of infinite size, the resulting size is governed by the other dataset,
* otherwise an exception is thrown at runtime.
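
A minimal zip sketch with arbitrary values, assuming the same module boilerplate as before: the two arity-1 datasets have the same size, so the result is an arity-2 dataset of 3 samples, bound here to the variables `input` and `expected`.

``
namespace data = boost::unit_test::data;

// Each sample pairs an input with its expected result.
BOOST_DATA_TEST_CASE(zip_sketch,
                     data::make({ 1, 2, 3 }) ^ data::make({ 2, 4, 6 }),
                     input, expected)
{
  BOOST_TEST( input * 2 == expected );
}
``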
[caution If the /zip/ operation is not supported for your compiler, the macro [macroref BOOST_TEST_NO_ZIP_COMPOSITION_AVAILABLE `BOOST_TEST_NO_ZIP_COMPOSITION_AVAILABLE`]
will be automatically set by the __UTF__.]

[bt_example dataset_example61..Example of zip on datasets..run]

[endsect] [/ zip operation on datasets]

[section Grid (Cartesian products)]
A ['grid], denoted `*`, is an operation on any two datasets `dsa` and `dsb`, resulting in a dataset where each sample of `dsa` is paired with each sample of `dsb`
exactly once. The order of the samples in the resulting dataset follows the left-to-right order of the symbol `*`, the rightmost dataset samples being iterated first.

``
dsa = (a_1, a_2, ... a_i)
dsb = (b_1, b_2, ... b_j)

dsa * dsb = ((a_1, b_1), (a_1, b_2) ... (a_1, b_j), (a_2, b_1), ... (a_2, b_j) ... (a_i, b_1), ... (a_i, b_j))
``

The grid is hence similar to the mathematical notion of a Cartesian product [footnote if the sequence is viewed as a set].

The following properties hold:

* the arity of the resulting dataset is the sum of the arities of the operand datasets,
* the size of the resulting dataset is the product of the sizes of the operand datasets,
* the operation is associative, and it is possible to combine more than two datasets in one expression,
* as for /zip/, the datasets do not need to have the same sample types (a small sketch is given after the caution below).
[caution If the /grid/ operation is not supported for your compiler, the macro [macroref BOOST_TEST_NO_GRID_COMPOSITION_AVAILABLE `BOOST_TEST_NO_GRID_COMPOSITION_AVAILABLE`]
will be automatically set by the __UTF__.]
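
A minimal grid sketch with arbitrary values, assuming the same module boilerplate as before: the product of a 3-sample integer range and a 2-sample dataset of doubles yields 6 test cases of arity 2, the right-hand dataset iterating fastest.

``
namespace data = boost::unit_test::data;

// 3 x 2 = 6 test cases; `index` comes from the range, `factor` from the doubles.
BOOST_DATA_TEST_CASE(grid_sketch,
                     data::xrange(3) * data::make({ 0.5, 0.75 }),
                     index, factor)
{
  BOOST_TEST( index < 3 );
  BOOST_TEST( factor < 1.0 );
}
``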
In the following example, the random number generator is the second dataset. Its state is evaluated 6 times (3 times for the first `xrange`, the first dimension,
and twice for the second `xrange`, the second dimension, to which it is zipped). Note that the state of the random engine is
not copied between two successive evaluations of the first dimension.

[bt_example dataset_example64..Example of Cartesian product..run-fail]

[endsect]

[endsect] [/ operations on dataset]

[/ ################################################################################################################################## ]
[section:generators Dataset generators]
Several ['generators] for datasets are implemented in the __UTF__:

* [link boost_test.tests_organization.test_cases.test_case_generation.generators.singletons Singletons]
* [link boost_test.tests_organization.test_cases.test_case_generation.generators.stl `forward iterable`] containers and
[link boost_test.tests_organization.test_cases.test_case_generation.generators.c_arrays `C` array] like datasets
* [link boost_test.tests_organization.test_cases.test_case_generation.generators.ranges ranges] or sequences of values
* datasets made of [link boost_test.tests_organization.test_cases.test_case_generation.generators.random random numbers] following a particular distribution

The `stl` and `C-array` generators are merely dataset views over existing collections, while ranges and random number sequences
describe new datasets.

[/ ################################################################################################################################## ]
[h4:singletons Singletons]
A singleton is a dataset containing a unique value. The size and arity of such a dataset are 1. This value can be

* either consumed once,
* or repeated as many times as needed in a zip operation.

As mentioned in the /zip/ section, when zipped with a distribution of infinite size, the resulting dataset will have
a size of 1.

The singleton is constructible through the function [funcref boost::unit_test::data::make].

[bt_example dataset_example65..Singleton..run]
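
In addition, a minimal sketch with arbitrary values of the repetition behavior in a zip, assuming the same module boilerplate as before: zipped with a 3-sample range, the singleton is repeated for each sample.

``
namespace data = boost::unit_test::data;

// 3 test cases; `answer` is 42 in each of them.
BOOST_DATA_TEST_CASE(singleton_sketch,
                     data::make(42) ^ data::xrange(3),
                     answer, index)
{
  BOOST_TEST( answer == 42 );
  BOOST_TEST( index < 3 );
}
``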
[/ ################################################################################################################################## ]
[h4:c_arrays Datasets from C arrays]

This type of dataset does not contain the logic for generating the sequence of values; it is used as a wrapper over an existing
sequence contained in a `C` array. The arity is 1 and the size is the size of the array.

Such datasets are simply constructed from an overload of the [funcref boost::unit_test::data::make `make`] function.

[bt_example dataset_example66..Array..run]

[/ ################################################################################################################################## ]
[h4:stl Datasets from forward iterable containers]

As for `C` arrays, this type of dataset does not contain the logic for generating the sequence of values; it is used to traverse an existing sequence.
The arity is 1 and the size is the same as that of the container.

[tip The C++11 implementation enables dataset generation from any container whose iterator implements the forward iterator concept.
For C++03, the feature is enabled on most STL containers.]

[bt_example dataset_example67..Dataset from `std::vector` and `std::map`..run]

[/ ################################################################################################################################## ]
[h4:ranges Ranges]
A range is a dataset that implements a sequence of equally spaced values, defined by a /start/, an /end/ and a /step/.

It is possible to construct a range using the factory [funcref boost::unit_test::data::xrange], available in the overloads below:

``
#include <boost/test/data/test_case.hpp>
#include <boost/test/data/monomorphic.hpp>

auto range1 = data::xrange( (data::step = 0.5, data::end = 3 ) ); // constructed with named values, starting at 0
auto range2 = data::xrange( begin, end );                         // begin < end required
auto range3 = data::xrange( begin, end, step );                   // begin < end required
auto range4 = data::xrange( end );                                // begin = 0, end cannot be <= 0, see above
auto range5 = data::xrange( end, (data::begin = 1) );             // named value after end
``

[tip The named value parameters should be declared inside parentheses.]
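
A minimal usage sketch, assuming the same module boilerplate as before: `xrange(5)` yields the 5 samples 0 to 4, since `end` is excluded from the generated values.

``
namespace data = boost::unit_test::data;

// 5 test cases, one per value of the range 0, 1, 2, 3, 4.
BOOST_DATA_TEST_CASE(range_sketch, data::xrange(5), sample)
{
  BOOST_TEST( sample >= 0 );
  BOOST_TEST( sample < 5 );
}
``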
[h5 Parameters]

The details of the named value parameters are given in the table below.

[table:id_range_parameter_table Range parameters
[
[Name]
[Default]
[Description]
]
[
[`begin`]
[0]
[Beginning of the generated sequence. The `begin` value is included in the set of values returned
by the generator.
]
]
[
[`end`]
[+ infinity]
[End of the generated sequence. The `end` value is not included in the set of values returned
by the generator. If omitted, the generator has infinite size.
]
]
[
[`step`]
[1]
[Number indicating the step between two consecutive samples of the generated range.
The default type is the same as the input type. This value should not be 0, and it should be of the same
sign as `end - begin`.
]
]
]
[bt_example dataset_example59..Declaring a test with a range..run-fail]

[/ ################################################################################################################################## ]
[h4:random Random value dataset]

This type of dataset generates a sequence of random numbers following a given /distribution/. The /seed/ and the /engine/ may also be
specified.

[caution The random value generator is available only for C++11 capable compilers. If this feature is not supported for your compiler,
the macro [macroref BOOST_TEST_NO_RANDOM_DATASET_AVAILABLE `BOOST_TEST_NO_RANDOM_DATASET_AVAILABLE`]
will be automatically set by the __UTF__.]

It is possible to construct a random sequence using the factory [funcref boost::unit_test::data::random], available in the overloads below:

``
auto rdgen1 = random();      // uniform distribution (real) on [0, 1)
auto rdgen2 = random(1, 17); // uniform distribution (integer) on [1, 17]

// Default random generator engine, Gaussian distribution (mean=5, sigma=2) and seed set to 100.
auto rdgen3 = random( (data::seed = 100UL,
                       data::distribution = std::normal_distribution<>(5., 2)) );
``
Since the generated dataset has an infinite size, the sequence size should be narrowed by combining the dataset with another
one, through e.g. a /zip/ operation, as in the sketch below.
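
A minimal sketch, assuming the same module boilerplate as before: the infinite random dataset is narrowed to 7 samples by zipping it with a finite range, which also gives each test case an index.

``
namespace data = boost::unit_test::data;

// 7 test cases; `value` is drawn uniformly in [1, 17] for each of them.
BOOST_DATA_TEST_CASE(random_sketch,
                     data::random(1, 17) ^ data::xrange(7),
                     value, index)
{
  BOOST_TEST( value >= 1 );
  BOOST_TEST( value <= 17 );
}
``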
[tip In order to be able to reproduce a failure within a randomized parameter test case, the seed that generated the failure may be
set explicitly in order to generate the same sequence of random values.]

[h5 Parameters]

The details of the named value parameters are given in the table below.

[table:id_random_parameter_table Random value dataset parameters
[
[Parameter name]
[Default]
[Description]
]
[
[`seed`]
[(not set)]
[Seed for the generation of the random sequence.]
]
[
[`distribution`]
[Uniform]
[Distribution instance for generating the random number sequences. The `end` value is not included in the set of values returned
by the generator for real values, and is included for integers.]
]
[
[`engine`]
[`std::default_random_engine`]
[Random number generator engine.]
]
]

[bt_example dataset_example63..Declaring a test with a random sequence..run-fail]

[endsect] [/ Dataset generators]

[endsect]