@@ -60,7 +60,7 @@ but a header-only Boost license version is always available (if somewhat slower)

Should you just wish to 'cut to the chase' just to get bigger integers and/or bigger and more precise reals as simply and portably as possible,
close to 'drop-in' replacements for the __fundamental_type analogs,
- then use a fully Boost-licensed number type, and skip to one of more of :
+ then use a fully Boost-licensed number type, and skip to one or more of:

* __cpp_int for multiprecision integers,
* __cpp_rational for rational types,
@@ -133,8 +133,8 @@ Conversions are also allowed:

However conversions that are inherently lossy are either declared explicit or else forbidden altogether:

- d = 3.14; // Error implicit conversion from double not allowed.
- d = static_cast<mp::int512_t>(3.14); // OK explicit construction is allowed
+ d = 3.14; // Error, implicit conversion from double not allowed.
+ d = static_cast<mp::int512_t>(3.14); // OK, explicit construction is allowed

Mixed arithmetic will fail if the conversion is either ambiguous or explicit:

@@ -195,9 +195,9 @@ of references to the arguments of the function, plus some compile-time informati
is.

The great advantage of this method is the ['elimination of temporaries]: for example, the "naive" implementation
- of `operator*` above, requires one temporary for computing the result, and at least another one to return it. It's true
+ of `operator*` above requires one temporary for computing the result, and at least another one to return it. It's true
that sometimes this overhead can be reduced by using move-semantics, but it can't be eliminated completely. For example,
- lets suppose we're evaluating a polynomial via Horner's method, something like this:
+ let's suppose we're evaluating a polynomial via Horner's method, something like this:

T a[7] = { /* some values */ };
//....
@@ -206,7 +206,7 @@ lets suppose we're evaluating a polynomial via Horner's method, something like t
If type `T` is a `number`, then this expression is evaluated ['without creating a single temporary value]. In contrast,
if we were using the [mpfr_class] C++ wrapper for [mpfr] - then this expression would result in no less than 11
temporaries (this is true even though [mpfr_class] does use expression templates to reduce the number of temporaries somewhat). Had
- we used an even simpler wrapper around [mpfr] like [mpreal] things would have been even worse and no less that 24 temporaries
+ we used an even simpler wrapper around [mpfr] like [mpreal] things would have been even worse and no less than 24 temporaries
are created for this simple expression (note - we actually measure the number of memory allocations performed rather than
the number of temporaries directly, note also that the [mpf_class] wrapper supplied with GMP-5.1 or later reduces the number of
temporaries to pretty much zero). Note that if we compile with expression templates disabled and rvalue-reference support
@@ -247,7 +247,7 @@ is created in this case.

Given the comments above, you might be forgiven for thinking that expression-templates are some kind of universal-panacea:
sadly though, all tricks like this have their downsides. For one thing, expression template libraries
- like this one, tend to be slower to compile than their simpler cousins, they're also harder to debug
+ like this one tend to be slower to compile than their simpler cousins, they're also harder to debug
(should you actually want to step through our code!), and rely on compiler optimizations being turned
on to give really good performance. Also, since the return type from expressions involving `number`s
is an "unmentionable implementation detail", you have to be careful to cast the result of an expression
@@ -256,23 +256,23 @@ to the actual number type when passing an expression to a template function. Fo
template <class T>
void my_proc(const T&);

- Then calling:
+ Then calling

my_proc(a+b);

- Will very likely result in obscure error messages inside the body of `my_proc` - since we've passed it
+ will very likely result in obscure error messages inside the body of `my_proc` - since we've passed it
an expression template type, and not a number type. Instead we probably need:

my_proc(my_number_type(a+b));

Having said that, these situations don't occur that often - or indeed not at all for non-template functions.
In addition, all the functions in the Boost.Math library will automatically convert expression-template arguments
- to the underlying number type without you having to do anything, so:
+ to the underlying number type without you having to do anything, so

mpfr_float_100 a(20), delta(0.125);
boost::math::gamma_p(a, a + delta);

- Will work just fine, with the `a + delta` expression template argument getting converted to an `mpfr_float_100`
+ will work just fine, with the `a + delta` expression template argument getting converted to an `mpfr_float_100`
internally by the Boost.Math library.

[caution In C++11 you should never store an expression template using:
@@ -299,7 +299,7 @@ dramatic as the reduction in number of temporaries would suggest. For example,
we see the following typical results for polynomial execution:

[table Evaluation of Order 6 Polynomial.
- [[Library] [Relative Time] [Relative number of memory allocations ]]
+ [[Library] [Relative Time] [Relative Number of Memory Allocations ]]
[[number] [1.0 (0.00957s)] [1.0 (2996 total)]]
[[[mpfr_class]] [1.1 (0.0102s)] [4.3 (12976 total)]]
[[[mpreal]] [1.6 (0.0151s)] [9.3 (27947 total)]]
@@ -311,13 +311,13 @@ a number of reasons for this:
* The cost of extended-precision multiplication and division is so great, that the times taken for these tend to
swamp everything else.
* The cost of an in-place multiplication (using `operator*=`) tends to be more than an out-of-place
- `operator*` (typically `operator *=` has to create a temporary workspace to carry out the multiplication, where
- as `operator*` can use the target variable as workspace). Since the expression templates carry out their
+ `operator*` (typically `operator *=` has to create a temporary workspace to carry out the multiplication,
+ whereas `operator*` can use the target variable as workspace). Since the expression templates carry out their
magic by converting out-of-place operators to in-place ones, we necessarily take this hit. Even so the
transformation is more efficient than creating the extra temporary variable, just not by as much as
one would hope.

- Finally, note that `number` takes a second template argument, which, when set to `et_off` disables all
+ Finally, note that `number` takes a second template argument, which, when set to `et_off`, disables all
the expression template machinery. The result is much faster to compile, but slower at runtime.

We'll conclude this section by providing some more performance comparisons between these three libraries,