[/
 / Copyright (c) 2009 Helge Bahmann
 / Copyright (c) 2014, 2017, 2018 Andrey Semashev
 /
 / Distributed under the Boost Software License, Version 1.0. (See accompanying
 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 /]

[library Boost.Atomic
    [quickbook 1.4]
    [authors [Bahmann, Helge][Semashev, Andrey]]
    [copyright 2011 Helge Bahmann]
    [copyright 2012 Tim Blechmann]
    [copyright 2013, 2017, 2018 Andrey Semashev]
    [id atomic]
    [dirname atomic]
    [purpose Atomic operations]
    [license
        Distributed under the Boost Software License, Version 1.0.
        (See accompanying file LICENSE_1_0.txt or copy at
        [@http://www.boost.org/LICENSE_1_0.txt])
    ]
]
[section:introduction Introduction]

[section:introduction_presenting Presenting Boost.Atomic]

[*Boost.Atomic] is a library that provides [^atomic] data types and operations
on these data types, as well as the memory ordering constraints required for
coordinating multiple threads through atomic variables. It implements the
interface as defined by the C++11 standard, but makes this feature available
on platforms lacking system/compiler support for this particular C++11 feature.

Users of this library should already be familiar with concurrency in general,
as well as elementary concepts such as "mutual exclusion".

The implementation makes use of processor-specific instructions where possible
(via inline assembler, platform libraries or compiler intrinsics), and falls
back to "emulating" atomic operations through locking.

[endsect]
[section:introduction_purpose Purpose]

Operations on "ordinary" variables are not guaranteed to be atomic.
This means that with [^int n=0] initially, two threads concurrently
executing

[c++]

    void function()
    {
        n++;
    }

might result in [^n==1] instead of 2: each thread will read the
old value into a processor register, increment it and write the result
back. Both threads may therefore write [^1], unaware that the other thread
is doing likewise.

Declaring [^atomic<int> n=0] instead, the same operation on
this variable will always result in [^n==2], as each operation on this
variable is ['atomic]: each operation behaves as if it
were strictly sequentialized with respect to the other.
Atomic variables are useful for two purposes:

* as a means for coordinating multiple threads via custom
  coordination protocols
* as faster alternatives to "locked" access to simple variables

Take a look at the [link atomic.usage_examples examples] section
for common patterns.

[endsect]

[endsect]
[section:thread_coordination Thread coordination using Boost.Atomic]

The most common use of [*Boost.Atomic] is to realize custom
thread synchronization protocols: the goal is to coordinate
accesses of threads to shared variables in order to avoid
"conflicts". The programmer must be aware of the fact that
compilers, CPUs and the cache hierarchies may generally reorder
memory references at will. As a consequence a program such as:

[c++]

    int x = 0, y = 0;

    thread1:
        x = 1;
        y = 1;

    thread2:
        if (y == 1) {
            assert(x == 1);
        }

might indeed fail, as there is no guarantee that the read of `x`
by thread2 "sees" the write by thread1.

[*Boost.Atomic] uses a synchronisation concept based on the
['happens-before] relation to describe the guarantees under
which situations such as the above one cannot occur.

The remainder of this section will discuss ['happens-before] in
a "hands-on" way instead of giving a fully formalized definition.
The reader is encouraged to additionally have a
look at the discussion of the correctness of a few of the
[link atomic.usage_examples examples] afterwards.
[section:mutex Enforcing ['happens-before] through mutual exclusion]

As an introductory example to understand how arguing using
['happens-before] works, consider two threads synchronizing
using a common mutex:

[c++]

    mutex m;

    thread1:
        m.lock();
        ... /* A */
        m.unlock();

    thread2:
        m.lock();
        ... /* B */
        m.unlock();

The "lockset-based intuition" would be to argue that A and B
cannot be executed concurrently as the code paths require a
common lock to be held.

One can however also arrive at the same conclusion using
['happens-before]: either thread1 or thread2 will succeed first
at [^m.lock()]. If this is thread1, then as a consequence
thread2 cannot succeed at [^m.lock()] before thread1 has executed
[^m.unlock()], so A ['happens-before] B in this case.
By symmetry, if thread2 succeeds at [^m.lock()] first, we can
conclude that B ['happens-before] A.

Since this already exhausts all options, we can conclude that
either A ['happens-before] B or B ['happens-before] A must
always hold. Obviously we cannot state ['which] of the two relationships
holds, but either one is sufficient to conclude that A and B
cannot conflict.

Compare the [link boost_atomic.usage_examples.example_spinlock spinlock]
implementation to see how the mutual exclusion concept can be
mapped to [*Boost.Atomic].

[endsect]
[section:release_acquire ['happens-before] through [^release] and [^acquire]]

The most basic pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^acquire] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically
  modifies) an atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  this value from the same atomic variable with
  [^acquire] semantic and
* ... thread2 subsequently performs an operation B,

... then A ['happens-before] B.

Consider the following example:

[c++]

    atomic<int> a(0);

    thread1:
        ... /* A */
        a.fetch_add(1, memory_order_release);

    thread2:
        int tmp = a.load(memory_order_acquire);
        if (tmp == 1) {
            ... /* B */
        } else {
            ... /* C */
        }

In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  in this case thread2 will execute B and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  in this case, thread2 will execute C, but "A ['happens-before] C"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Therefore, A and B cannot conflict, but A and C ['can] conflict.

[endsect]
[section:fences Fences]

Ordering constraints are generally specified together with an access to
an atomic variable. It is however also possible to issue "fence"
operations in isolation; in this case the fence operates in
conjunction with preceding (for `acquire`, `consume` or `seq_cst`
operations) or succeeding (for `release` or `seq_cst`) atomic
operations.

The example from the previous section could also be written in
the following way:

[c++]

    atomic<int> a(0);

    thread1:
        ... /* A */
        atomic_thread_fence(memory_order_release);
        a.fetch_add(1, memory_order_relaxed);

    thread2:
        int tmp = a.load(memory_order_relaxed);
        if (tmp == 1) {
            atomic_thread_fence(memory_order_acquire);
            ... /* B */
        } else {
            ... /* C */
        }

This provides the same ordering guarantees as previously, but
elides a (possibly expensive) memory ordering operation in
the case C is executed.

[endsect]
[section:release_consume ['happens-before] through [^release] and [^consume]]

The second pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^consume] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically modifies) an
  atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  this value from the same atomic variable with [^consume] semantic and
* ... thread2 subsequently performs an operation B that is ['computationally
  dependent on the value of the atomic variable],

... then A ['happens-before] B.

Consider the following example:

[c++]

    atomic<int> a(0);
    complex_data_structure data[2];

    thread1:
        data[1] = ...; /* A */
        a.store(1, memory_order_release);

    thread2:
        int index = a.load(memory_order_consume);
        complex_data_structure tmp = data[index]; /* B */

In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  in this case thread2 will read [^data\[1\]] and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  in this case thread2 will read [^data\[0\]] and "A ['happens-before] B"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Here, the ['happens-before] relationship helps ensure that any
accesses (presumably writes) to [^data\[1\]] by thread1 happen
before the accesses (presumably reads) to [^data\[1\]] by thread2:
lacking this relationship, thread2 might see stale/inconsistent
data.

Note that in this example it is essential that operation B is computationally
dependent on the value of the atomic variable; the following program would
therefore be erroneous:

[c++]

    atomic<int> a(0);
    complex_data_structure data[2];

    thread1:
        data[1] = ...; /* A */
        a.store(1, memory_order_release);

    thread2:
        int index = a.load(memory_order_consume);
        complex_data_structure tmp;
        if (index == 0)
            tmp = data[0];
        else
            tmp = data[1];

[^consume] is most commonly (and most safely! see
[link atomic.limitations limitations]) used with
pointers; compare for example the
[link boost_atomic.usage_examples.singleton singleton with double-checked locking].

[endsect]
[section:seq_cst Sequential consistency]

The third pattern for coordinating threads via [*Boost.Atomic]
uses [^seq_cst] for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently performs any operation with [^seq_cst],
* ... thread1 subsequently performs an operation B,
* ... thread2 performs an operation C,
* ... thread2 subsequently performs any operation with [^seq_cst],
* ... thread2 subsequently performs an operation D,

... then either "A ['happens-before] D" or "C ['happens-before] B" holds.

In this case it does not matter whether thread1 and thread2 operate
on the same or different atomic variables, or use a "stand-alone"
[^atomic_thread_fence] operation.
[endsect]

[endsect]
[section:interface Programming interfaces]

[section:configuration Configuration and building]

The library contains header-only and compiled parts. The library is
header-only for lock-free cases but requires a separate binary to
implement the lock-based emulation. Users are able to detect whether
linking to the compiled part is required by checking the
[link atomic.interface.feature_macros feature macros].

The following macros affect library behavior:

[table
    [[Macro] [Description]]
    [[`BOOST_ATOMIC_NO_CMPXCHG8B`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg8b` instruction used
      to implement 64-bit atomic operations. This is the case with very old CPUs (pre-Pentium).
      The library does not perform runtime detection of this instruction, so running code
      that uses 64-bit atomics on such CPUs will result in crashes, unless this macro is defined.
      Note that the macro does not affect MSVC, GCC and compatible compilers because the library infers
      this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_CMPXCHG16B`] [Affects 64-bit x86 MSVC and Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg16b` instruction used
      to implement 128-bit atomic operations. This is the case with some early 64-bit AMD CPUs;
      all Intel CPUs and later AMD CPUs support this instruction. The library does not
      perform runtime detection of this instruction, so running code that uses 128-bit
      atomics on such CPUs will result in crashes, unless this macro is defined. Note that
      the macro does not affect GCC and compatible compilers because the library infers
      this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_MFENCE`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `mfence` instruction used
      to implement thread fences. This instruction was added with the SSE2 instruction set extension,
      which has been available in CPUs since the Intel Pentium 4. The library does not perform runtime
      detection of this instruction, so running the library code on older CPUs will result in crashes, unless
      this macro is defined. Note that the macro does not affect MSVC, GCC and compatible compilers
      because the library infers this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_FLOATING_POINT`] [When defined, support for floating point operations is disabled.
      Floating point types will be treated similarly to trivially copyable structs and no capability macros
      will be defined.]]
    [[`BOOST_ATOMIC_FORCE_FALLBACK`] [When defined, all operations are implemented with locks.
      This is mostly used for testing and should not be used in real world projects.]]
    [[`BOOST_ATOMIC_DYN_LINK` and `BOOST_ALL_DYN_LINK`] [Control library linking. If defined,
      the library assumes dynamic linking, otherwise static. The latter macro affects all Boost
      libraries, not just [*Boost.Atomic].]]
    [[`BOOST_ATOMIC_NO_LIB` and `BOOST_ALL_NO_LIB`] [Control library auto-linking on Windows.
      When defined, disables auto-linking. The latter macro affects all Boost libraries,
      not just [*Boost.Atomic].]]
]

Besides macros, it is important to specify the correct compiler options for the target CPU.
With GCC and compatible compilers this affects whether particular atomic operations are
lock-free or not.

The Boost building process is described in the [@http://www.boost.org/doc/libs/release/more/getting_started/ Getting Started guide].
For example, you can build [*Boost.Atomic] with the following command line:

[pre
    bjam --with-atomic variant=release instruction-set=core2 stage
]

[endsect]
[section:interface_memory_order Memory order]

    #include <boost/memory_order.hpp>

The enumeration [^boost::memory_order] defines the following
values to represent memory ordering constraints:

[table
    [[Constant] [Description]]
    [[`memory_order_relaxed`] [No ordering constraint.
      Informally speaking, following operations may be reordered before,
      and preceding operations may be reordered after, the atomic
      operation. This constraint is suitable only when
      either a) further operations do not depend on the outcome
      of the atomic operation or b) ordering is enforced through
      stand-alone `atomic_thread_fence` operations. The operation on
      the atomic value itself is still atomic though.
    ]]
    [[`memory_order_release`] [
      Perform `release` operation. Informally speaking,
      prevents all preceding memory operations from being reordered
      past this point.
    ]]
    [[`memory_order_acquire`] [
      Perform `acquire` operation. Informally speaking,
      prevents succeeding memory operations from being reordered
      before this point.
    ]]
    [[`memory_order_consume`] [
      Perform `consume` operation. More relaxed (and
      on some architectures more efficient) than `memory_order_acquire`,
      as it only affects succeeding operations that are
      computationally dependent on the value retrieved from
      an atomic variable.
    ]]
    [[`memory_order_acq_rel`] [Perform both `release` and `acquire` operations.]]
    [[`memory_order_seq_cst`] [
      Enforce sequential consistency. Implies `memory_order_acq_rel`, but
      additionally enforces a total order for all operations so qualified.
    ]]
]

For compilers that support C++11 scoped enums, the library also defines scoped synonyms
that are preferred in modern programs:

[table
    [[Pre-C++11 constant] [C++11 equivalent]]
    [[`memory_order_relaxed`] [`memory_order::relaxed`]]
    [[`memory_order_release`] [`memory_order::release`]]
    [[`memory_order_acquire`] [`memory_order::acquire`]]
    [[`memory_order_consume`] [`memory_order::consume`]]
    [[`memory_order_acq_rel`] [`memory_order::acq_rel`]]
    [[`memory_order_seq_cst`] [`memory_order::seq_cst`]]
]

See section [link atomic.thread_coordination ['happens-before]] for an explanation
of the various ordering constraints.

[endsect]
[section:interface_atomic_flag Atomic flags]

    #include <boost/atomic/atomic_flag.hpp>

The `boost::atomic_flag` type provides the most basic set of atomic operations
suitable for implementing mutually exclusive access to thread-shared data. The flag
can have one of two possible states: set and clear. The class implements the
following operations:

[table
    [[Syntax] [Description]]
    [
        [`atomic_flag()`]
        [Initialize to the clear state. See the discussion below.]
    ]
    [
        [`bool test_and_set(memory_order order)`]
        [Sets the atomic flag to the set state; returns `true` if the flag had been set prior to the operation]
    ]
    [
        [`void clear(memory_order order)`]
        [Sets the atomic flag to the clear state]
    ]
]

The `order` parameter always defaults to `memory_order_seq_cst`.

Note that the default constructor `atomic_flag()` is unlike `std::atomic_flag`, which
leaves the default-constructed object uninitialized. This potentially requires dynamic
initialization during the program startup to perform the object initialization, which
makes it unsafe to create global `boost::atomic_flag` objects that can be used before
entering `main()`. Some compilers though (especially those supporting C++11 `constexpr`)
may be smart enough to perform flag initialization statically (which is, in C++11 terms,
constant initialization).

This difference is deliberate and is done to support C++03 compilers. C++11 defines the
`ATOMIC_FLAG_INIT` macro which can be used to statically initialize `std::atomic_flag`
to a clear state like this:

    std::atomic_flag flag = ATOMIC_FLAG_INIT; // constant initialization

This macro cannot be implemented in C++03 because for that `atomic_flag` would have to be
an aggregate type, which it cannot be because it has to prohibit copying and consequently
define the default constructor. Thus the closest equivalent C++03 code using [*Boost.Atomic]
would be:

    boost::atomic_flag flag; // possibly, dynamic initialization in C++03;
                             // constant initialization in C++11

The same code is also valid in C++11, so it can be used universally. However, for
interface parity with `std::atomic_flag`, if possible, the library also defines the
`BOOST_ATOMIC_FLAG_INIT` macro, which is equivalent to `ATOMIC_FLAG_INIT`:

    boost::atomic_flag flag = BOOST_ATOMIC_FLAG_INIT; // constant initialization

This macro will only be implemented on a C++11 compiler. When this macro is not available,
the library defines `BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`.
[endsect]
[section:interface_atomic_object Atomic objects]

    #include <boost/atomic/atomic.hpp>

[^boost::atomic<['T]>] provides methods for atomically accessing
variables of a suitable type [^['T]]. The type is suitable if
it is ['trivially copyable] (3.9/9 \[basic.types\]). The following are
examples of types compatible with this requirement:

* a scalar type (e.g. integer, boolean, enum or pointer type)
* a [^class] or [^struct] that has no non-trivial copy or move
  constructors or assignment operators, has a trivial destructor,
  and that is comparable via [^memcmp].

Note that classes with virtual functions or virtual base classes
do not satisfy the requirements. Also be warned
that structures with "padding" between data members may compare
non-equal via [^memcmp] even though all members are equal. This may also be
the case with some floating point types, which include padding bits themselves.
[section:interface_atomic_generic [^boost::atomic<['T]>] template class]

All atomic objects support the following operations and properties:

[table
    [[Syntax] [Description]]
    [
        [`atomic()`]
        [Initialize to an unspecified value]
    ]
    [
        [`atomic(T initial_value)`]
        [Initialize to [^initial_value]]
    ]
    [
        [`bool is_lock_free()`]
        [Checks if the atomic object is lock-free; the returned value is consistent with the `is_always_lock_free` static constant, see below]
    ]
    [
        [`T load(memory_order order)`]
        [Return current value]
    ]
    [
        [`void store(T value, memory_order order)`]
        [Write new value to atomic variable]
    ]
    [
        [`T exchange(T new_value, memory_order order)`]
        [Exchange current value with `new_value`, returning the previous value]
    ]
    [
        [`bool compare_exchange_weak(T & expected, T desired, memory_order order)`]
        [Compare current value with `expected`; change it to `desired` if it matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back into `expected`. May fail spuriously, so must generally be
         retried in a loop.]
    ]
    [
        [`bool compare_exchange_weak(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
        [Compare current value with `expected`; change it to `desired` if it matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back into `expected`. May fail spuriously, so must generally be
         retried in a loop.]
    ]
    [
        [`bool compare_exchange_strong(T & expected, T desired, memory_order order)`]
        [Compare current value with `expected`; change it to `desired` if it matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back into `expected`.]
    ]
    [
        [`bool compare_exchange_strong(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
        [Compare current value with `expected`; change it to `desired` if it matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back into `expected`.]
    ]
    [
        [`static bool is_always_lock_free`]
        [This static boolean constant indicates whether any atomic object of this type is lock-free]
    ]
]

The `order` parameter always defaults to `memory_order_seq_cst`.

The `compare_exchange_weak`/`compare_exchange_strong` variants
taking four parameters differ from the three-parameter variants
in that they allow a different memory ordering constraint to
be specified in case the operation fails.

In addition to these explicit operations, each
[^atomic<['T]>] object also supports
implicit [^store] and [^load] through the use of "assignment"
and "conversion to [^T]" operators. Avoid using these operators,
as they do not allow specifying a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.
[endsect]
[section:interface_atomic_integral [^boost::atomic<['integral]>] template class]

In addition to the operations listed in the previous section,
[^boost::atomic<['I]>] for integral
types [^['I]], except `bool`, supports the following operations,
which correspond to [^std::atomic<['I]>]:

[table
    [[Syntax] [Description]]
    [
        [`I fetch_add(I v, memory_order order)`]
        [Add `v` to variable, returning previous value]
    ]
    [
        [`I fetch_sub(I v, memory_order order)`]
        [Subtract `v` from variable, returning previous value]
    ]
    [
        [`I fetch_and(I v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning previous value]
    ]
    [
        [`I fetch_or(I v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning previous value]
    ]
    [
        [`I fetch_xor(I v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning previous value]
    ]
]
Additionally, as a [*Boost.Atomic] extension, the following operations are also provided:
[table
[[Syntax] [Description]]
[
[`I fetch_negate(memory_order order)`]
[Change the sign of the value stored in the variable, returning previous value]
]
[
[`I fetch_complement(memory_order order)`]
[Set the variable to the one\'s complement of the current value, returning previous value]
]
[
[`I negate(memory_order order)`]
[Change the sign of the value stored in the variable, returning the result]
]
[
[`I add(I v, memory_order order)`]
[Add `v` to variable, returning the result]
]
[
[`I sub(I v, memory_order order)`]
[Subtract `v` from variable, returning the result]
]
[
[`I bitwise_and(I v, memory_order order)`]
[Apply bit-wise "and" with `v` to variable, returning the result]
]
[
[`I bitwise_or(I v, memory_order order)`]
[Apply bit-wise "or" with `v` to variable, returning the result]
]
[
[`I bitwise_xor(I v, memory_order order)`]
[Apply bit-wise "xor" with `v` to variable, returning the result]
]
[
[`I bitwise_complement(memory_order order)`]
[Set the variable to the one\'s complement of the current value, returning the result]
]
[
[`void opaque_negate(memory_order order)`]
[Change the sign of the value stored in the variable, returning nothing]
]
[
[`void opaque_add(I v, memory_order order)`]
[Add `v` to variable, returning nothing]
]
[
[`void opaque_sub(I v, memory_order order)`]
[Subtract `v` from variable, returning nothing]
]
[
[`void opaque_and(I v, memory_order order)`]
[Apply bit-wise "and" with `v` to variable, returning nothing]
]
[
[`void opaque_or(I v, memory_order order)`]
[Apply bit-wise "or" with `v` to variable, returning nothing]
]
[
[`void opaque_xor(I v, memory_order order)`]
[Apply bit-wise "xor" with `v` to variable, returning nothing]
]
[
[`void opaque_complement(memory_order order)`]
[Set the variable to the one\'s complement of the current value, returning nothing]
]
[
[`bool negate_and_test(memory_order order)`]
[Change the sign of the value stored in the variable, returning `true` if the result is non-zero and `false` otherwise]
]
[
[`bool add_and_test(I v, memory_order order)`]
[Add `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
]
[
[`bool sub_and_test(I v, memory_order order)`]
[Subtract `v` from variable, returning `true` if the result is non-zero and `false` otherwise]
]
[
[`bool and_and_test(I v, memory_order order)`]
[Apply bit-wise "and" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
]
[
[`bool or_and_test(I v, memory_order order)`]
[Apply bit-wise "or" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
]
[
[`bool xor_and_test(I v, memory_order order)`]
[Apply bit-wise "xor" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
]
[
[`bool complement_and_test(memory_order order)`]
[Set the variable to the one\'s complement of the current value, returning `true` if the result is non-zero and `false` otherwise]
]
[
[`bool bit_test_and_set(unsigned int n, memory_order order)`]
[Set bit number `n` in the variable to 1, returning `true` if the bit was previously set to 1 and `false` otherwise]
]
[
[`bool bit_test_and_reset(unsigned int n, memory_order order)`]
[Set bit number `n` in the variable to 0, returning `true` if the bit was previously set to 1 and `false` otherwise]
]
[
[`bool bit_test_and_complement(unsigned int n, memory_order order)`]
[Change bit number `n` in the variable to the opposite value, returning `true` if the bit was previously set to 1 and `false` otherwise]
]
]
[note In Boost.Atomic 1.66 the [^['op]_and_test] operations returned the opposite value (i.e. `true` if the result is zero). This was changed
to the current behavior in 1.67 for consistency with other operations in Boost.Atomic, as well as with conventions taken in the C++ standard library.
Boost.Atomic 1.66 was the only release shipped with the old behavior. Users upgrading from Boost 1.66 to a later release can define the
`BOOST_ATOMIC_HIGHLIGHT_OP_AND_TEST` macro when building their code to generate deprecation warnings on the [^['op]_and_test] function calls
(the functions are not actually deprecated though; this is just a way to highlight their use).]
`order` always has `memory_order_seq_cst` as the default parameter.
The [^opaque_['op]] and [^['op]_and_test] variants of the operations
may result in more efficient code on some architectures because
the original value of the atomic variable is not preserved. In the
[^bit_test_and_['op]] operations, the bit number `n` starts from 0, which
means the least significant bit, and must not exceed
[^std::numeric_limits<['I]>::digits - 1].
In addition to these explicit operations, each
[^boost::atomic<['I]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=`, `-=`, `&=`, `|=` and `^=`.
Avoid using these operators, as they do not allow specifying a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.
[endsect]
[section:interface_atomic_floating_point [^boost::atomic<['floating-point]>] template class]
[note The support for floating point types is optional and can be disabled by defining `BOOST_ATOMIC_NO_FLOATING_POINT`.]
In addition to the operations applicable to all atomic objects,
[^boost::atomic<['F]>] for floating point
types [^['F]] supports the following operations,
which correspond to [^std::atomic<['F]>]:
[table
[[Syntax] [Description]]
[
[`F fetch_add(F v, memory_order order)`]
[Add `v` to variable, returning previous value]
]
[
[`F fetch_sub(F v, memory_order order)`]
[Subtract `v` from variable, returning previous value]
]
]
Additionally, as a [*Boost.Atomic] extension, the following operations are also provided:
[table
[[Syntax] [Description]]
[
[`F fetch_negate(memory_order order)`]
[Change the sign of the value stored in the variable, returning previous value]
]
[
[`F negate(memory_order order)`]
[Change the sign of the value stored in the variable, returning the result]
]
[
[`F add(F v, memory_order order)`]
[Add `v` to variable, returning the result]
]
[
[`F sub(F v, memory_order order)`]
[Subtract `v` from variable, returning the result]
]
[
[`void opaque_negate(memory_order order)`]
[Change the sign of the value stored in the variable, returning nothing]
]
[
[`void opaque_add(F v, memory_order order)`]
[Add `v` to variable, returning nothing]
]
[
[`void opaque_sub(F v, memory_order order)`]
[Subtract `v` from variable, returning nothing]
]
]
`order` always has `memory_order_seq_cst` as the default parameter.
The [^opaque_['op]] variants of the operations
may result in more efficient code on some architectures because
the original value of the atomic variable is not preserved.
In addition to these explicit operations, each
[^boost::atomic<['F]>] object also supports operators `+=` and `-=`.
Avoid using these operators, as they do not allow specifying a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.
When using atomic operations with floating point types, bear in mind that [*Boost.Atomic]
always performs bitwise comparison of the stored values. This means that operations like
`compare_exchange*` may fail if the stored value and comparand have different binary representations,
even if they would normally compare equal. This is typically the case when either of the numbers
is [@https://en.wikipedia.org/wiki/Denormal_number denormalized]. It also means that the behavior
with regard to special floating point values like NaN and signed zero differs from normal C++.
Another source of problems is padding bits that are added to some floating point types for alignment.
One widespread example is the Intel x87 extended double format, which is typically stored as 80 bits
of value padded with 16 or 48 unused bits. These padding bits are often uninitialized and contain garbage,
which makes two equal numbers have different binary representations. The library attempts to account for
the known cases of this kind, but in general some platforms may not be covered. Note that the C++
standard makes no guarantees about the reliability of `compare_exchange*` operations in the face of padding or
trap bits.
[endsect]
[section:interface_atomic_pointer [^boost::atomic<['pointer]>] template class]
In addition to the operations applicable to all atomic objects,
[^boost::atomic<['P]>] for pointer
types [^['P]] (other than pointers to [^void], function or member pointers) supports
the following operations, which correspond to [^std::atomic<['P]>]:
[table
[[Syntax] [Description]]
[
[`T fetch_add(ptrdiff_t v, memory_order order)`]
[Add `v` to variable, returning previous value]
]
[
[`T fetch_sub(ptrdiff_t v, memory_order order)`]
[Subtract `v` from variable, returning previous value]
]
]
Similarly to integers, the following [*Boost.Atomic] extensions are also provided:
[table
[[Syntax] [Description]]
[
[`T add(ptrdiff_t v, memory_order order)`]
[Add `v` to variable, returning the result]
]
[
[`T sub(ptrdiff_t v, memory_order order)`]
[Subtract `v` from variable, returning the result]
]
[
[`void opaque_add(ptrdiff_t v, memory_order order)`]
[Add `v` to variable, returning nothing]
]
[
[`void opaque_sub(ptrdiff_t v, memory_order order)`]
[Subtract `v` from variable, returning nothing]
]
[
[`bool add_and_test(ptrdiff_t v, memory_order order)`]
[Add `v` to variable, returning `true` if the result is non-null and `false` otherwise]
]
[
[`bool sub_and_test(ptrdiff_t v, memory_order order)`]
[Subtract `v` from variable, returning `true` if the result is non-null and `false` otherwise]
]
]
`order` always has `memory_order_seq_cst` as the default parameter.
In addition to these explicit operations, each
[^boost::atomic<['P]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=`, `-=`. Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint, which always defaults to `memory_order_seq_cst`.
[endsect]
[section:interface_atomic_convenience_typedefs [^boost::atomic<['T]>] convenience typedefs]
For convenience, several shorthand typedefs of [^boost::atomic<['T]>] are provided:
[c++]
    typedef atomic< char > atomic_char;
    typedef atomic< unsigned char > atomic_uchar;
    typedef atomic< signed char > atomic_schar;
    typedef atomic< unsigned short > atomic_ushort;
    typedef atomic< short > atomic_short;
    typedef atomic< unsigned int > atomic_uint;
    typedef atomic< int > atomic_int;
    typedef atomic< unsigned long > atomic_ulong;
    typedef atomic< long > atomic_long;
    typedef atomic< unsigned long long > atomic_ullong;
    typedef atomic< long long > atomic_llong;
    typedef atomic< void* > atomic_address;
    typedef atomic< bool > atomic_bool;
    typedef atomic< wchar_t > atomic_wchar_t;
    typedef atomic< char16_t > atomic_char16_t;
    typedef atomic< char32_t > atomic_char32_t;
    typedef atomic< uint8_t > atomic_uint8_t;
    typedef atomic< int8_t > atomic_int8_t;
    typedef atomic< uint16_t > atomic_uint16_t;
    typedef atomic< int16_t > atomic_int16_t;
    typedef atomic< uint32_t > atomic_uint32_t;
    typedef atomic< int32_t > atomic_int32_t;
    typedef atomic< uint64_t > atomic_uint64_t;
    typedef atomic< int64_t > atomic_int64_t;
    typedef atomic< int_least8_t > atomic_int_least8_t;
    typedef atomic< uint_least8_t > atomic_uint_least8_t;
    typedef atomic< int_least16_t > atomic_int_least16_t;
    typedef atomic< uint_least16_t > atomic_uint_least16_t;
    typedef atomic< int_least32_t > atomic_int_least32_t;
    typedef atomic< uint_least32_t > atomic_uint_least32_t;
    typedef atomic< int_least64_t > atomic_int_least64_t;
    typedef atomic< uint_least64_t > atomic_uint_least64_t;
    typedef atomic< int_fast8_t > atomic_int_fast8_t;
    typedef atomic< uint_fast8_t > atomic_uint_fast8_t;
    typedef atomic< int_fast16_t > atomic_int_fast16_t;
    typedef atomic< uint_fast16_t > atomic_uint_fast16_t;
    typedef atomic< int_fast32_t > atomic_int_fast32_t;
    typedef atomic< uint_fast32_t > atomic_uint_fast32_t;
    typedef atomic< int_fast64_t > atomic_int_fast64_t;
    typedef atomic< uint_fast64_t > atomic_uint_fast64_t;
    typedef atomic< intmax_t > atomic_intmax_t;
    typedef atomic< uintmax_t > atomic_uintmax_t;
    typedef atomic< std::size_t > atomic_size_t;
    typedef atomic< std::ptrdiff_t > atomic_ptrdiff_t;
    typedef atomic< intptr_t > atomic_intptr_t;
    typedef atomic< uintptr_t > atomic_uintptr_t;
The typedefs are provided only if the corresponding type is available.
[endsect]
[endsect]
[section:interface_fences Fences]
    #include <boost/atomic/fences.hpp>
[table
[[Syntax] [Description]]
[
[`void atomic_thread_fence(memory_order order)`]
[Issue fence for coordination with other threads.]
]
[
[`void atomic_signal_fence(memory_order order)`]
[Issue fence for coordination with signal handler (only in same thread).]
]
]
[endsect]
[section:feature_macros Feature testing macros]
    #include <boost/atomic/capabilities.hpp>
[*Boost.Atomic] defines a number of macros to allow compile-time
detection of whether an atomic data type is implemented using
"true" atomic operations, or whether an internal "lock" is
used to provide atomicity. The following macros will be
defined to `0` if operations on the data type always
require a lock, to `1` if operations on the data type may
sometimes require a lock, and to `2` if they are always lock-free:
[table
[[Macro] [Description]]
[
[`BOOST_ATOMIC_FLAG_LOCK_FREE`]
[Indicate whether `atomic_flag` is lock-free]
]
[
[`BOOST_ATOMIC_BOOL_LOCK_FREE`]
[Indicate whether `atomic<bool>` is lock-free]
]
[
[`BOOST_ATOMIC_CHAR_LOCK_FREE`]
[Indicate whether `atomic<char>` (including signed/unsigned variants) is lock-free]
]
[
[`BOOST_ATOMIC_CHAR16_T_LOCK_FREE`]
[Indicate whether `atomic<char16_t>` (including signed/unsigned variants) is lock-free]
]
[
[`BOOST_ATOMIC_CHAR32_T_LOCK_FREE`]
[Indicate whether `atomic<char32_t>` (including signed/unsigned variants) is lock-free]
]
[
[`BOOST_ATOMIC_WCHAR_T_LOCK_FREE`]
[Indicate whether `atomic<wchar_t>` (including signed/unsigned variants) is lock-free]
]
[
[`BOOST_ATOMIC_SHORT_LOCK_FREE`]
[Indicate whether `atomic<short>` (including signed/unsigned variants) is lock-free]
]
[
[`BOOST_ATOMIC_INT_LOCK_FREE`]
[Indicate whether `atomic<int>` (including signed/unsigned variants) is lock-free]
]
[
[`BOOST_ATOMIC_LONG_LOCK_FREE`]
[Indicate whether `atomic<long>` (including signed/unsigned variants) is lock-free]
]
[
[`BOOST_ATOMIC_LLONG_LOCK_FREE`]
[Indicate whether `atomic<long long>` (including signed/unsigned variants) is lock-free]
]
[
[`BOOST_ATOMIC_ADDRESS_LOCK_FREE` or `BOOST_ATOMIC_POINTER_LOCK_FREE`]
[Indicate whether `atomic<T *>` is lock-free]
]
[
[`BOOST_ATOMIC_THREAD_FENCE`]
[Indicate whether `atomic_thread_fence` function is lock-free]
]
[
[`BOOST_ATOMIC_SIGNAL_FENCE`]
[Indicate whether `atomic_signal_fence` function is lock-free]
]
]
In addition to these standard macros, [*Boost.Atomic] also defines a number of extension macros,
which can also be useful. Like the standard ones, these macros are defined to values `0`, `1` and `2`
to indicate whether the corresponding operations are lock-free or not.
[table
[[Macro] [Description]]
[
[`BOOST_ATOMIC_INT8_LOCK_FREE`]
[Indicate whether `atomic<int8_type>` is lock-free.]
]
[
[`BOOST_ATOMIC_INT16_LOCK_FREE`]
[Indicate whether `atomic<int16_type>` is lock-free.]
]
[
[`BOOST_ATOMIC_INT32_LOCK_FREE`]
[Indicate whether `atomic<int32_type>` is lock-free.]
]
[
[`BOOST_ATOMIC_INT64_LOCK_FREE`]
[Indicate whether `atomic<int64_type>` is lock-free.]
]
[
[`BOOST_ATOMIC_INT128_LOCK_FREE`]
[Indicate whether `atomic<int128_type>` is lock-free.]
]
[
[`BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`]
[Defined after including `atomic_flag.hpp`, if the implementation
does not support the `BOOST_ATOMIC_FLAG_INIT` macro for static
initialization of `atomic_flag`. This macro is typically defined
for pre-C++11 compilers.]
]
]
In the table above, `intN_type` is a type that fits in `N` contiguous bits of storage and is suitably aligned for atomic operations.
For floating-point types the following macros are similarly defined:
[table
[[Macro] [Description]]
[
[`BOOST_ATOMIC_FLOAT_LOCK_FREE`]
[Indicate whether `atomic<float>` is lock-free.]
]
[
[`BOOST_ATOMIC_DOUBLE_LOCK_FREE`]
[Indicate whether `atomic<double>` is lock-free.]
]
[
[`BOOST_ATOMIC_LONG_DOUBLE_LOCK_FREE`]
[Indicate whether `atomic<long double>` is lock-free.]
]
]
These macros are not defined when support for floating point types is disabled by the user.
[endsect]
[endsect]
[section:usage_examples Usage examples]
[include examples.qbk]
[endsect]
[/
[section:platform_support Implementing support for additional platforms]
[include platform.qbk]
[endsect]
]
[/ [xinclude autodoc.xml] ]
[section:limitations Limitations]
While [*Boost.Atomic] strives to implement the atomic operations
from C++11 and later as faithfully as possible, there are a few
limitations that cannot be lifted without compiler support:
* [*Aggregate initialization syntax is not supported]: Since [*Boost.Atomic]
sometimes uses a storage type that is different from the value type,
the `atomic<>` template needs an initialization constructor that
performs the necessary conversion. This makes `atomic<>` a non-aggregate
type and prohibits aggregate initialization syntax (`atomic<int> a = {10}`).
[*Boost.Atomic] does support direct and unified initialization syntax though.
[*Advice]: Always use direct initialization (`atomic<int> a(10)`) or unified
initialization (`atomic<int> a{10}`) syntax.
* [*Initializing constructor is not `constexpr` for some types]: For value types
other than integral types and `bool`, the `atomic<>` initializing constructor needs
to perform a runtime conversion to the storage type. This limitation may be
lifted for more categories of types in the future.
* [*Default constructor is not trivial in C++03]: Because the initializing
constructor has to be defined in `atomic<>`, the default constructor
must also be defined. In C++03 the constructor cannot be defined as defaulted
and therefore it is not trivial. In C++11 the constructor is defaulted (and trivial,
if the default constructor of the value type is). In any case, the default
constructor of `atomic<>` performs default initialization of the atomic value,
as required in C++11. [*Advice]: In C++03, do not use [*Boost.Atomic] in contexts
where a trivial default constructor is important (e.g. as a global variable which
is required to be statically initialized).
* [*C++03 compilers may transform computation dependency to control dependency]:
Crucially, `memory_order_consume` only affects computationally-dependent
operations, but in general there is nothing preventing a compiler
from transforming a computation dependency into a control dependency.
A fully compliant C++11 compiler would be forbidden from such a transformation,
but in practice most if not all compilers have chosen to promote
`memory_order_consume` to `memory_order_acquire` instead
(see [@https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59448 this] gcc bug
for example). In the current implementation [*Boost.Atomic] follows that trend,
but this may change in the future.
[*Advice]: In general, avoid `memory_order_consume` and use `memory_order_acquire`
instead. Use `memory_order_consume` only in conjunction with
pointer values, and only if you can ensure that the compiler cannot
speculate and transform these into control dependencies.
* [*Fence operations may enforce "too strong" compiler ordering]:
Semantically, `memory_order_acquire`/`memory_order_consume`
and `memory_order_release` need to restrain reordering of
memory operations only in one direction. Since in C++03 there is no
way to express this constraint to the compiler, these act
as "full compiler barriers" in the C++03 implementation. In corner
cases this may result in slightly less efficient code than a C++11 compiler
could generate. [*Boost.Atomic] will use compiler intrinsics, if possible,
to express the proper ordering constraints.
* [*Atomic operations may enforce "too strong" memory ordering in debug mode]:
On some compilers, disabling optimizations makes it impossible to provide
memory ordering constraints as compile-time constants to the compiler intrinsics.
This causes the compiler to silently ignore the provided constraints and choose
the "strongest" memory order (`memory_order_seq_cst`) to generate code. Not only
does this reduce performance, it may also hide bugs in the user's code (e.g. if the
user specified a wrong memory order constraint that causes a data race).
[*Advice]: Always test your code with optimizations enabled.
* [*No interprocess fallback]: using `atomic<T>` in shared memory only works
correctly if `atomic<T>::is_lock_free() == true`.
* [*Signed integers must use [@https://en.wikipedia.org/wiki/Two%27s_complement two's complement]
representation]: [*Boost.Atomic] makes this requirement in order to implement
conversions between signed and unsigned integers internally. C++11 requires all
atomic arithmetic operations on integers to be well defined according to two's complement
arithmetic, which means that Boost.Atomic has to operate on unsigned integers internally
to avoid the undefined behavior that results from signed integer overflow. Platforms
with other signed integer representations are not supported.
[endsect]
[section:porting Porting]
[section:unit_tests Unit tests]
[*Boost.Atomic] provides a unit test suite to verify that the
implementation behaves as expected:
* [*fallback_api.cpp] verifies that the fallback-to-locking aspect
of [*Boost.Atomic] compiles and has correct value semantics.
* [*native_api.cpp] verifies that all atomic operations have correct
value semantics (e.g. "fetch_add" really adds the desired value,
returning the previous). It is a rough "smoke-test" to help weed
out the most obvious mistakes (for example width overflow,
signed/unsigned extension, ...).
* [*lockfree.cpp] verifies that the [*BOOST_ATOMIC_*_LOCK_FREE] macros
are set properly according to the expectations for a given
platform, and that they match up with the [*is_always_lock_free] and
[*is_lock_free] members of the [*atomic] object instances.
* [*atomicity.cpp] lets two threads race against each other modifying
a shared variable, verifying that the operations behave atomically
as appropriate. By nature, this test is necessarily stochastic, and
the test self-calibrates to yield 99% confidence that a
positive result indicates absence of an error. This test is
already very useful on uni-processor systems with preemption.
* [*ordering.cpp] lets two threads race against each other accessing
multiple shared variables, verifying that the operations
exhibit the expected ordering behavior. By nature, this test is
necessarily stochastic, and the test attempts to self-calibrate to
yield 99% confidence that a positive result indicates absence
of an error. This only works on true multi-processor (or multi-core)
systems. It does not yield any result on uni-processor systems
or emulators (due to there being no observable reordering even
in the `memory_order_relaxed` case) and will report that fact.
[endsect]
[section:tested_compilers Tested compilers]
[*Boost.Atomic] has been tested on and is known to work on
the following compilers/platforms:
* gcc 4.x: i386, x86_64, ppc32, ppc64, sparcv9, armv6, alpha
* Visual Studio Express 2008/Windows XP, x86, x64, ARM
[endsect]
[section:acknowledgements Acknowledgements]
* Adam Wulkiewicz created the logo used on the [@https://github.com/boostorg/atomic GitHub project page]. The logo was taken from his [@https://github.com/awulkiew/boost-logos collection] of Boost logos.
[endsect]
[endsect]