While working on the Make It Random library for Unity, I took up an engineering challenge: generation of random floating point numbers between zero and one quickly and with perfect uniform distribution. The most common method, dividing a random integer by the full range of possible integers, has a few substantial flaws that I hoped to avoid:

  1. It relies on a floating point division, which is slower than necessary.
  2. The resulting values clump non-uniformly across the unit range, so the distribution is not perfectly uniform (more on this below).
  3. It offers no easy control over whether the lower and upper bounds of the range are inclusive or exclusive.

With the help of various sources around the internet plus some clever use of probability mathematics, I was able to conquer all of these difficulties, providing perfectly uniform and fast generation of floating point numbers in the unit range. This includes all four variants of whether the lower and upper bounds are inclusive or exclusive. The techniques involved are explained below.

The first two are not at all new, but are included because I have not seen them discussed as often as I think they deserve, and because they help build the context for the third technique. This last technique is something I devised on my own, and although I have no doubt other smart people have already discovered it or something similar, I never ran across it while researching, so I’m eager to share it with others in this post. Plus, I suspect that the general technique can be usefully applied to other random value generation beyond just floating point numbers, so the more people are aware of it, the better.

Contents

  1. Clarification on Perfect Uniformity
  2. The Setup
  3. Step 1: Integers to Floating Point Without Division
  4. Step 2: Open Range and the Rejection Method
  5. Step 3: Closed Range and Fancy Probability
  6. High Precision Alternative Distribution
  7. Using the Precise Method to Gain an Extra Bit
  8. Performance Measurements
  9. Full C++ Source for Benchmark

Clarification on Perfect Uniformity

Before I proceed, let me clarify a bit what I mean by the second problem listed above, the issue of non-uniform clumping. Given the way floating point numbers are most commonly represented (according to the IEEE 754 Standard for Floating-Point Arithmetic), there isn’t a nice clean uniformity of values between zero and one. Instead, due to their exponential nature, the density of unique values is higher for ranges closer to zero. This image roughly exemplifies what is happening:

An example of floating point clustering between zero and one.

In this hypothetical format that behaves similarly to IEEE 754 floating point numbers, the overall range is divided into sub-ranges, starting with [0.5, 1) on the right, and reducing to half the size of each prior sub-range as the sub-ranges proceed to the left toward zero. So after the first is [0.25, 0.5), followed by [0.125, 0.25), and so on. This reduction goes on until the exponent cannot get any smaller, when the whole process is finally terminated by the value 0 (ignoring the minor detail of subnormal numbers). And yet, despite the ranges getting smaller and smaller, every single one of them contains the exact same quantity of unique values; eight of them, in the example shown above. Or equivalently, the density keeps getting higher, starting at 16 values per unit range, up to 32, then 64, doubling each time until the capacity of the exponent is exhausted.

When using division to generate a random floating point number between zero and one, this is the sort of distribution that is produced, and it is the result of the non-uniform clumpiness that I mentioned earlier. The reason is that before division, the random integers generated can and should be perfectly uniform. Each possible integer in the target range has an equal probability of being generated as any other. But the division operation maps those integers to unique floating point values in such a way that each unique floating point value will have a non-uniform number of integers mapped to it.

Consider if we generate 16-bit integers, in the range [0, 2^16 = 65536), and map those to the range [0, 1) using the low-precision floating point format shown above. The upper half of the 65536 integers, 32768 of them, will map to the 8 unique values between 0.5 and 1, or 4096 possible integers mapped to each of the 8 final values, resulting in a 4096/65536 chance (6.25%) that any one of these values will be generated. In the second range from the right, [0.25, 0.5), there are only 16384 possible integers, but they still map to 8 unique floating point values. Each one is mapped from 2048 possible integers and has a 2048/65536 chance (3.125%) of being generated. Following the same pattern, each value in the range [0.125, 0.25) has a 1024/65536 chance (1.5625%), followed by 512/65536 (0.78125%) and so on.

The shading in the above images indicates this probability. Many different integers map to the same unique values in the upper ranges, and so those values each have a higher probability of occurring. Very few integers map to the same values in the lower ranges, and they each have a lower probability of occurring. The combination of non-uniform density with non-uniform probability makes the overall effect consistently uniform. Low density combined with high probability in the upper ranges perfectly matches the high density and low probability in the lower ranges.

This tends to be good enough for many purposes, but there could be some subtle biases inadvertently introduced into a system because of the heavier clustering of larger numbers and finer distribution of smaller numbers. Instead, I want a distribution of floating point numbers that looks more like the following:

An example of a uniform distribution of floating point values between zero and one.

There are trade-offs with this model. The benefit is obviously that uniformity is far more consistent, and applies not just at a high level, but also in the details. Every single unique value that can be produced has an exactly equal probability of being generated as any other. The downside is that there are far fewer unique values possible, and a corresponding loss of absolute precision in some sub-ranges. For example, it is impossible for 0.03125 to be generated by the second model, whereas it is entirely possible in the first. But the second model has a nice symmetry that the first does not: It is also impossible for 0.96875 ( = 1.0 – 0.03125) to be generated in either model, which could be taken as a form of consistency in the second model, but feels sort of inconsistent in the first.

You may sometimes explicitly want the distribution of the first model, maximizing the nuance of numbers near zero, and there is at least one way other than using a division operation to generate such a distribution, so I will come back around to briefly cover that technique at the end. However, the majority of this article is aimed at generating the perfectly uniform model, so let’s dive in!

The Setup

First of all, let me describe the foundation from which I’m starting. I’ll presume we already have a pseudo-random number generator capable of generating high-quality uniformly distributed random bits. This might be something like the Mersenne Twister, or perhaps a more recent PCG or XorShift random engine. You could even use a cryptographically secure engine, though that would defeat the performance objectives of this post. Regardless of the specific engine chosen, these types of engines all tend to produce integers in the range [0, 2^n), for some integer n. Just be careful to avoid engines that produce integers in a range whose size is not a power of two, such as a multiply-with-carry engine with 2^32 – 1 chosen as its base.

I’m going to further suppose we have two functions built on an engine of choice, rand32() and rand64(), which return a uint32_t and uint64_t respectively. (I’ll be working in C/C++ land for this post.) These two functions fully saturate their number types with 32 or 64 fully uniform bits. Some of my implementations below will only use some but not all of those bits, but others will use all 32 or 64 bits to maximize performance, so it is presumed that all bits are available for such use.
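For concreteness, here is a minimal sketch of that interface. The use of the standard Mersenne Twister engines here is purely a placeholder of my own choosing; any engine that fills all 32 or 64 bits uniformly will do.

    #include <cstdint>
    #include <random>

    // Stand-in engines; substitute PCG, XorShift, or whatever engine you prefer.
    static std::mt19937 engine32;
    static std::mt19937_64 engine64;

    // Return 32 or 64 fully uniform random bits.
    uint32_t rand32() { return static_cast<uint32_t>(engine32()); }
    uint64_t rand64() { return engine64(); }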

One last bit of machinery: I’ll presume two functions for performing type punning from integers to floats and doubles, as_float(uint32_t) and as_double(uint64_t). Depending on your language, and your willingness to violate certain principles of the language and complicate things for your compiler, you might wish to be selective about the method used to perform this process. Here’s an example implementation in C++ using a union; it technically violates the C++ rules and results in the dreaded “undefined behavior”, but it is my understanding that it will nonetheless work reliably on nearly all compilers and platforms.
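A minimal sketch of such an implementation (with the undefined-behavior caveat above still applying):

    #include <cstdint>

    // Reinterpret the bits of an integer as a floating point value of the same size.
    float as_float(uint32_t i)
    {
        union { uint32_t u; float f; } pun = { i };
        return pun.f;
    }

    double as_double(uint64_t i)
    {
        union { uint64_t u; double d; } pun = { i };
        return pun.d;
    }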

As the product of this post, I’ll write eight functions making use of rand32() and rand64(). The first four will return 32-bit floats, and I’ll name them rand_float_oo(), rand_float_oc(), rand_float_co(), and rand_float_cc(). The ‘o’ and ‘c’ suffixes stand for “open” and “closed”, referring to whether the lower and upper bounds of the range are exclusive (open) or inclusive (closed). I’ll likewise implement four more for 64-bit doubles.

Step 1: Integers to Floating Point Without Division

The first functions I will implement are rand_float_co() and rand_double_co(), as they only make use of the first technique that I will explain. These functions generate a floating point number between 0 and 1, including the possibility of 0, but excluding the possibility of 1.

The bit layout of the IEEE 754 single precision binary floating point format: 1 sign bit, 8 exponent bits, 23 mantissa bits.

The bit layout of the IEEE 754 double precision binary floating point format: 1 sign bit, 11 exponent bits, 52 mantissa bits.

The above diagrams show the bit layout of the single and double precision formats of IEEE 754. The left bit is the sign bit (0 for positive, 1 for negative). Following that are the exponent and mantissa, which roughly follow the formula (1 + M) × 2^E, where E is a signed integer, and M is a fixed point number in the range [0, 1). In other words, the mantissa describes the details of the number, while the exponent indicates the magnitude. There are a number of useful ways that these binary formats can be manipulated, either directly as raw bits or interpreted as integers.

It just so happens that for the IEEE 754 binary formats, all floating point numbers from 1 up to but not including 2 have the same constant exponent. That’s exactly the size of range we’re looking for, just offset by 1. And conveniently, if you consider all the 2^23 or 2^52 possible bit patterns of the mantissa when paired with this constant exponent, they map perfectly to all the floating point values representable between 1 and 2, with perfect uniformity and identical “clumpiness”. By the latter, I mean that the difference between any given number in this range and the next larger representable number is always exactly the same amount.

So to generate a random number, all we have to do is set the sign bit to 0, the exponent to the appropriate constant, and fill the mantissa with random bits. (We can use a shift rather than a mask to get the needed bits in the right place; this is because some random engines tend to have higher quality randomness in the high bits compared to the low bit or bits.) And then to convert that to the desired range [0, 1), we can simply interpret the constructed value as a floating point number in the range [1, 2) and subtract 1 using standard floating point arithmetic.
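A sketch of those first two functions follows; the constants 0x3F800000 and 0x3FF0000000000000 are the bit patterns of 1.0f and 1.0, that is, a zero sign bit, the constant exponent, and an all-zero mantissa.

    // [0, 1): constant exponent for [1, 2), random mantissa from the high bits, then subtract 1.
    float rand_float_co()
    {
        return as_float(0x3F800000U | (rand32() >> 9)) - 1.0f;
    }

    double rand_double_co()
    {
        return as_double(0x3FF0000000000000ULL | (rand64() >> 12)) - 1.0;
    }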

Conveniently, the range (0, 1], which does not include 0 but does include 1, is nearly as easy to generate. The only difference is that instead of subtracting 1, we’ll subtract from 2. Because the inner floating point generation can produce the value 1, and 2 – 1 = 1, the value 1 is a possible result of the final computation. But the inner generation can only produce floating point numbers close to but not including 2. So, for example, 2 – 1.9999999 = 0.0000001 is really close to 0, but an actual value of 0 is not possible.
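In code form, a sketch of the (0, 1] variants differs only in that final arithmetic step:

    // (0, 1]: same construction in [1, 2), but subtract the result from 2 instead.
    float rand_float_oc()
    {
        return 2.0f - as_float(0x3F800000U | (rand32() >> 9));
    }

    double rand_double_oc()
    {
        return 2.0 - as_double(0x3FF0000000000000ULL | (rand64() >> 12));
    }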

Step 2: Open Range and the Rejection Method

Next let us consider the open range (0, 1), from which both 0 and 1 are excluded. In the half-open ranges above, the number of possible values was an integer power of 2, 2^23 for floats and 2^52 for doubles. This is why we could get away with simply grabbing the requisite number of bits directly from the random engine, since we know that the individual bits are uniformly distributed.

But with this new range, we’ll have a non-power-of-2 number of possible values, 2^23 – 1 and 2^52 – 1, respectively. The typical solution is to add 1 to the randomly generated integer of n bits, and then perform a floating point division by 2^n + 1. But as noted at the beginning, this is guaranteed to produce a less than perfectly uniform distribution, because you are mapping the 2^n intermediate values onto the 2^23 – 1 (or 2^52 – 1) final values, and the final values will be targets for inconsistent quantities of the intermediate values. (Try mapping 2^4 + 1 = 17 values to 2^2 = 4 final values; at best, one of the final values will occur 25% more often than each of the other three.)

When dealing with large quantities, this non-uniformity is admittedly quite minor, but it is less than perfect. Besides, it does not work with the bit manipulation shown in the above section, incurring the performance cost of a floating point division instead. An alternative approach, important even when just working with integer ranges of non-power of 2 sizes, is the rejection method. Simply put, random numbers are generated repeatedly and invalid ones are rejected until one is generated that is within the desired range.

Yeah, at first consideration it sounds terrible. Pay the cost of generating potentially multiple random integers, with no definite cap on how many get rejected, plus the cost of a conditional for each iteration. But if the range of the initial random integer is properly controlled, things actually work out quite well under some circumstances, and this is fortunately just about the best circumstance possible.

Consider 32-bit floats first. We can use almost exactly the same process as in rand_float_co() above. The only thing we need to do is reject the bit pattern of 23 zeros, as that would produce 1.0f as an intermediate float, and consequently a final value of 0.0f after the subtraction. All other bit patterns of 23 bits are entirely valid. This means that the loop conditional is false around 99.99999% of the time. Thanks to branch prediction on modern processors, this will minimize the impact of the conditional on instruction pipelining, keeping things flowing nice and smoothly. For doubles it is even better, with the conditional being false over 99.9999999999999% of the time!

How shall we perform the rejection test? The obvious approach is to just mask off the unwanted bits, and then compare the remaining bits to 0. But that’s two operations per iteration. It just so happens that because we are using the high bits instead of the low bits, we can use a cleverly defined direct comparison operation and skip the mask entirely. Once we have an integer that passes our check, a single shift operation is enough to get the desired bits in the right arrangement, replacing the bit mask entirely. With all the above combined, here are the implementations for our next two functions:
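Here is a sketch of those implementations. Because the mantissa comes from the high bits, “all 23 (or 52) relevant bits are zero” is equivalent to the whole random integer being less than 2^9 (or 2^12), so the rejection test is a single comparison.

    // (0, 1): reject the one bit pattern whose mantissa bits are all zero,
    // since it would produce 1.0 before the subtraction and thus 0.0 after it.
    float rand_float_oo()
    {
        uint32_t n;
        do { n = rand32(); } while (n < 0x00000200U); // high 23 bits all zero iff n < 2^9
        return as_float(0x3F800000U | (n >> 9)) - 1.0f;
    }

    double rand_double_oo()
    {
        uint64_t n;
        do { n = rand64(); } while (n < 0x1000ULL); // high 52 bits all zero iff n < 2^12
        return as_double(0x3FF0000000000000ULL | (n >> 12)) - 1.0;
    }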

Step 3: Closed Range and Fancy Probability

And now we are ready to tackle the final two functions, generating numbers in the closed range [0, 1], within which both 0 and 1 are included. We might be tempted to implement this function using the same rejection method concept as used above, but in contrast to how the above case was nearly ideal concerning the predictability of the conditional, this case is the exact opposite.

The reason why is that there is now one more possible output value, for a total of 2^n + 1, rather than one less. To utilize the rejection method, we must generate numbers in a larger range than needed, and the next power of 2 is 2^(n + 1), which is nearly twice as large as 2^n + 1. The consequence is that almost half of our random integers will be rejected, meaning that on average, two random integers need to be generated for every floating point number produced. Further, CPU branch prediction becomes worthless, because there is about a 50/50 chance of the branch going either way.

Fortunately, there’s a clever way around this, to turn the worst case into nearly the best case again, if one has some spare bits to work with. And we conveniently do indeed have spare bits to work with: 9 bits when generating floats, since we’re only using 23 out of 32, and 12 with doubles, after subtracting the 52 we’re using from the 64 generated.

So how does this work? We start once again by considering the half-open range [0, 1). All of these values are perfectly valid for the closed range [0, 1]; the only issue is that 1 itself is missing. So there needs to be a small chance, when generating a number, that 1 will be produced instead of something from the range [0, 1). And that small chance needs to be perfectly balanced so that 1 has an equal likelihood of being generated as any single value from [0, 1).

Let’s consider the 32-bit float case, so that we can work with concrete numbers. There are 2^23 + 1 possible numbers in our output range [0, 1]. Naturally, that means that each individual value has a 1 in 2^23 + 1 chance of being generated. Now we can very quickly generate a number in the range [0, 1), and there are exactly 2^23 numbers in this range, each of which of course has a 1 in 2^23 chance of being generated. This is almost the probability that we want them to have, but just a tiny bit more likely than it should be.

How can we adjust that probability? We combine it with an additional probability. The likelihood of two independent random events both happening is just the product of their individual probabilities. We know the probability of one event: generating any one particular value in the range [0, 1) has a 1 in 2^23 chance; we’ll label that p. We also know the probability we want: 1 in 2^23 + 1; we’ll call that r. So what we are looking for is a second random event q such that the probability of both p and q happening together is r:

p × q = r

(1 / 2^23) × q = 1 / (2^23 + 1)

Solving for q we get:

q = 2^23 / (2^23 + 1)

Given that, here is the concept of our adjusted algorithm thus far: Generate a number in the range [0, 1). Do an additional probability check that has a 2^23 in 2^23 + 1 chance of succeeding. If it succeeds, which it almost always will, return the first random number generated. Otherwise, in that rare 1 in 2^23 + 1 case, return 1.

Frustratingly, this still obviously requires performing an additional random check. And the probability of this second random check still has that same terribly uncooperative denominator which we were hoping to avoid. But if we further divide this second random check itself into two distinct random checks, one which is super cheap and usually true, and a second one which is expensive but rarely needs to be performed, we can once again get great performance in the common case, and only rarely deal with the nasty stuff. This is where the spare bits mentioned above come into play.

We have 9 spare bits which we can use to perform a check with a 2^9 – 1 in 2^9 chance of passing (a 511 in 512 chance). I would say that reasonably counts as “almost always true”, but it’s not quite enough to reach the desired 2^23 in 2^23 + 1 probability. And to combine that 9-bit usually-true chance with a second random event such that the overall probability is higher, we would need to use a logical OR relationship, which in probability mathematics becomes a bit more awkward than an AND relationship.

To keep things simple with the math, let us invert the logic a bit. Instead of performing a usually-true probability check and only returning 1 if it fails, we will perform a usually-false probability check and return 1 in the rare case that it succeeds. The result is that we can use the 9 bits to perform a 1 in 2^9 check. It’s a small likelihood, but not as small as the 1 in 2^23 + 1 that we are now looking for after inverting the earlier probability. And as intended, making the probability even smaller is as simple as requiring yet another probability check to succeed. Calculating this new probability follows the same process as above. Our first probability (p) is 1 in 512; our desired probability (r) is 1 in 2^23 + 1. We can put that into the same formula p × q = r from above:

(1 / 2^9) × q = 1 / (2^23 + 1)

Solving this new equation for q we get:

q = 2^9 / (2^23 + 1)

The rearranged and expanded algorithm is as follows: Generate a number in the range [0, 1). Do an additional probability check that has just a 1 in 2^9 chance of succeeding. If it does succeed, then do a further probability check with a 2^9 in 2^23 + 1 chance of succeeding. If it also succeeds, return 1. Otherwise, as soon as either of those two checks fails, return the random number first generated. This ensures that 1 has exactly a 1 in 2^23 + 1 chance of being returned, which by implication means that all the other values from 0 up to but not including 1 collectively have a 2^23 in 2^23 + 1 chance. And since there are exactly 2^23 values in this range less than 1, and since we know that these already occur with equal probability, we can assert that each one individually has a 1 in 2^23 + 1 probability of occurring, just like 1.

As for performance, most of the time, the first random check (which just uses the spare 9 bits already generated) will fail, and we can immediately return the random number generated. Only occasionally will we need to perform the awkward 2^9 in 2^23 + 1 check. This means that we will only need to generate one random integer under most circumstances. Additionally, branch prediction should work quite well too, since the conditional is highly predictable.

For doubles, everything is much the same, except that the first check is now 1 in 2^12 (even better!) thanks to having 12 spare bits, and the messy check becomes 2^12 in 2^52 + 1.

The only detail remaining is the implementation of the awkward non-power of 2 probability check. That can be done simply enough using the same rejection method concept used earlier with the open range, just in a slightly different code form. I’ll provide this as separate functions, since doing so will simplify the structure of the main functions. I’ll also utilize a bit manipulation trick to get a mask appropriate for minimizing the number of rejections.
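Here is a sketch of such a helper (the exact signature is my own guess). It returns true with probability numerator/denominator, building an all-ones mask that just covers the denominator so that rejections are as infrequent as the denominator allows.

    // Returns true with probability numerator / denominator, where the
    // denominator need not be a power of two. The mask trick rounds the
    // denominator up to the next power of two minus one before rejecting.
    bool rand_probability(uint32_t numerator, uint32_t denominator)
    {
        uint32_t mask = denominator - 1;
        mask |= mask >> 1;
        mask |= mask >> 2;
        mask |= mask >> 4;
        mask |= mask >> 8;
        mask |= mask >> 16;
        uint32_t n;
        do { n = rand32() & mask; } while (n >= denominator);
        return n < numerator;
    }

    bool rand_probability(uint64_t numerator, uint64_t denominator)
    {
        uint64_t mask = denominator - 1;
        mask |= mask >> 1;
        mask |= mask >> 2;
        mask |= mask >> 4;
        mask |= mask >> 8;
        mask |= mask >> 16;
        mask |= mask >> 32;
        uint64_t n;
        do { n = rand64() & mask; } while (n >= denominator);
        return n < numerator;
    }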

With those helper functions in place, our final two random floating point functions look like this:
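A sketch along those lines: the low 9 (or 12) spare bits of the same random integer supply the cheap 1 in 2^9 (or 1 in 2^12) check, and the helper above handles the rare awkward one.

    // [0, 1]: usually return a value from [0, 1); with probability 1 / (2^23 + 1), return 1.
    float rand_float_cc()
    {
        uint32_t n = rand32();
        // Cheap 1 in 2^9 check on the spare low bits, then the rare 2^9 in 2^23 + 1 check.
        if ((n & 0x01FFU) == 0U && rand_probability(uint32_t(1) << 9, (uint32_t(1) << 23) + 1U))
            return 1.0f;
        return as_float(0x3F800000U | (n >> 9)) - 1.0f;
    }

    double rand_double_cc()
    {
        uint64_t n = rand64();
        // Cheap 1 in 2^12 check on the spare low bits, then the rare 2^12 in 2^52 + 1 check.
        if ((n & 0x0FFFULL) == 0ULL && rand_probability(uint64_t(1) << 12, (uint64_t(1) << 52) + 1ULL))
            return 1.0;
        return as_double(0x3FF0000000000000ULL | (n >> 12)) - 1.0;
    }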

High Precision Alternative Distribution

As mentioned near the beginning of the article, all of the above functions sacrifice possible precision that floating point formats are capable of representing for the sake of more perfect uniformity. If you would rather get the higher precision for numbers close to zero, but wish to still avoid division and take advantage of the techniques above, there is a way to do this.

The key is to use standard integer-to-floating-point conversion to turn a randomly generated integer into the nearest equivalent floating point representation, and then to subtract the appropriate amount from the floating point number’s exponent. Subtracting n from the exponent is essentially the same as dividing the number by 2^n, but is faster, with the tradeoff that the cast operation is more complex than simple bit manipulations. This process can also be interpreted as converting a fixed point value to floating point.

Because this requires a back-and-forth interpretation of bits as integers and floating point values, it is something that needs to be done within the context of the type punning functionality introduced earlier.
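Here is a sketch of these conversion functions, assuming inverse punning helpers as_uint32(float) and as_uint64(double) as counterparts to as_float and as_double. The _cast names match the ones in the benchmark tables below, while rand_float_co_cast64 is just my own label for the 64-bits-in, float-out variant.

    // Assumed inverse punning helpers, mirroring as_float and as_double.
    uint32_t as_uint32(float f)
    {
        union { float f; uint32_t u; } pun = { f };
        return pun.u;
    }

    uint64_t as_uint64(double d)
    {
        union { double d; uint64_t u; } pun = { d };
        return pun.u;
    }

    // Cast to floating point, then subtract from the exponent field (which starts
    // at bit 23 for floats and bit 52 for doubles) to divide by 2^32 or 2^64.
    // Note: as discussed below, inputs near the top of the integer range can round
    // up during the cast and yield exactly 1.0 unless they are capped by rejection.
    float rand_float_co_cast()
    {
        uint32_t n = rand32();
        if (n == 0U) return 0.0f; // zero has no exponent to adjust
        return as_float(as_uint32((float)n) - (32U << 23));
    }

    float rand_float_co_cast64() // 64 bits in, single precision out
    {
        uint64_t n = rand64();
        if (n == 0ULL) return 0.0f;
        return as_float(as_uint32((float)n) - (64U << 23));
    }

    double rand_double_co_cast()
    {
        uint64_t n = rand64();
        if (n == 0ULL) return 0.0; // zero has no exponent to adjust
        return as_double(as_uint64((double)n) - (64ULL << 52));
    }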

These functions are capable of taking a full random sequence of 32 or 64 bits respectively, and converting them into floats and doubles with the extra precision required. Note that they check for the input equaling zero beforehand, because zero as a floating point number does not have the appropriate bitwise format to behave properly with the exponent subtraction and thus needs to be handled explicitly.

Also note that there is an extra function, the second one, which takes 64 bits as input but still only returns a single precision float. This would have been irrelevant with the earlier function, because only 23 bits were ever used or useful in that case. In this case, however, more bits can indeed be useful for numbers that get closer to zero. For larger numbers, the lower bits will just get truncated away, while for smaller numbers, all the upper bits will be zero anyway, so the lower bits can be utilized for extra precision. Using 32 bits is better than nothing, because the extra 9 bits provide some degree of higher absolute precision for smaller numbers. But if you are willing to generate 64 bits, they can also be put to use. In fact, even 128 bits could be useful if you wanted your numbers to be able to get really tiny, but I do not know what options there are on various platforms for quickly converting a 128-bit integer to a floating point number.

One final note: Because standard conversion is used, the floating point rounding rules come into play, so you should be aware of how they can affect the results. In particular, if you want to produce floating point numbers up to but not including one, you will need to use the rejection method to keep your random integers capped to a certain maximum, close to but a little less than the maximum possible for the number of bits being generated. Anything above this maximum would still produce a number technically less than one after the exponent subtraction if an infinite-precision format were used, but in practice it might be rounded up to one depending on the active rounding mode.

Using the Precise Method to Gain an Extra Bit

The original bit-manipulation method happens to leave a single bit of available precision unused. This is due to the fact that we are restricting ourselves to the precision offered within the range [1, 2) when we first assign the random bits combined with the fixed exponent, even though the range [0.5, 1) offers double the absolute precision, and ranges below that offer even more.

But trying to get access to this extra bit of precision is tricky, as it isn’t at the same position for all numbers. When the floating point subtraction is performed to shift the [1, 2) number into the [0, 1) range, a left bit shift is essentially being done on the mantissa, based on how small the exponent gets, which is in turn based on how close to zero the number gets. There will always be at least one bit’s worth of shifting, which is how the extra available bit is introduced, but before the shift happens there is nowhere to store that bit, and after the shift we cannot (quickly) know which bit to set.

The more precise method just introduced, of casting the integer to float and then reducing the exponent, gets around this problem, as the left shift occurs during the cast and has access to potentially more bits than will fit into the final mantissa. In the form presented above, we take full advantage of those extra bits, which both induces possible rounding to account for the excess of bits and loses the perfect uniformity. But if we are careful to limit the number of bits used, we can avoid both the rounding and the loss of uniformity.

To do so, we use just 24 bits for floats (one better than the 23 bits used by the first method), and 53 bits for doubles (instead of just 52 bits). Then we subtract 24 or 53 from the exponent, instead of the 32 or 64 used when converting all the available bits for maximum precision. When compared with the first method, we basically trade the floating point subtraction for a cast from integer to floating point, and gain a bit of precision in the exchange. Unfortunately, as we already saw with the casting method above, zero is a special case that I could find no clever way around, so it also introduces a conditional; one that is almost always true, but still a complication that could impact performance compared to the first method.
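A sketch of this trade-off for both types, using the same cast-and-adjust machinery (the function names here are mine, purely for illustration):

    // [0, 1) with 24 (or 53) uniformly distributed bits of precision: one more bit
    // than the bit-manipulation method, with no rounding since the integers fit
    // exactly in the mantissa.
    float rand_float_co_24()
    {
        uint32_t n = rand32() >> 8;   // keep 24 random bits
        if (n == 0U) return 0.0f;     // zero still needs special handling
        return as_float(as_uint32((float)n) - (24U << 23)); // divide by 2^24 via the exponent
    }

    double rand_double_co_53()
    {
        uint64_t n = rand64() >> 11;  // keep 53 random bits
        if (n == 0ULL) return 0.0;
        return as_double(as_uint64((double)n) - (53ULL << 52)); // divide by 2^53 via the exponent
    }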

Performance Measurements

No post on writing high performance code is complete without some performance numbers, so I have written a quick program to test these functions out and compare them with their division-based counterparts. Out of curiosity, I also included a few measurements of some C++11 utilities. It measures partially unrolled loops for around a given number of seconds, and then records the average nanoseconds required per call. It also includes a warmup phase at the beginning to get all the caches prepared. The data below is from executing 16 calls per loop iteration, with a half-second warmup and a ten second measurement period.

Here is the data from my x86-64 desktop machine with an AMD A10-7850K processor, with the benchmark application built by Visual Studio 2015 in 64-bit mode with settings configured to favor speed over size and to inline any suitable functions. The fourth column is just the time it takes to construct a floating point value, calculated by subtracting the average time it takes to just generate a single random integer (the first row in each table) from the total time of the operation. The fifth column is that same overhead, expressed as a percentage.

32-bit Operation Total ops/s Total ns/op Overhead ns/op Overhead %
rand_next32 630,128,871 1.587
rand_float_co 455,120,444 2.197 0.610 38.5%
rand_float_oc 435,561,244 2.296 0.709 44.7%
rand_float_oo 339,964,172 2.941 1.355 85.4%
rand_float_cc 239,279,454 4.179 2.592 163.3%
rand_float_co_cast 306,611,060 3.261 1.674 105.5%
rand_float_oc_cast 306,326,767 3.264 1.678 105.7%
rand_float_oo_cast 313,051,776 3.194 1.607 101.3%
rand_float_cc_cast 264,845,545 3.776 2.189 137.9%
rand_float_co_div 352,447,100 2.837 1.250 78.8%
rand_float_oc_div 350,996,447 2.849 1.262 79.5%
rand_float_oo_div 327,496,401 3.053 1.466 92.4%
rand_float_cc_div 363,223,592 2.753 1.166 73.5%

64-bit Operation Total ops/s Total ns/op Overhead ns/op Overhead %
rand_next64 622,868,711 1.605
rand_double_co 439,437,295 2.276 0.670 41.7%
rand_double_oc 446,584,345 2.239 0.634 39.5%
rand_double_oo 339,282,071 2.947 1.342 83.6%
rand_double_cc 250,406,659 3.994 2.388 148.7%
rand_double_co_cast 318,517,898 3.140 1.534 95.6%
rand_double_oc_cast 319,119,101 3.134 1.528 95.2%
rand_double_oo_cast 313,245,056 3.192 1.587 98.8%
rand_double_cc_cast 171,230,955 5.840 4.235 263.8%
rand_double_co_div 91,477,340 10.932 9.326 580.9%
rand_double_oc_div 133,058,891 7.515 5.910 368.1%
rand_double_oo_div 86,363,972 11.579 9.973 621.2%
rand_double_cc_div 141,402,060 7.072 5.467 340.5%

For comparison, here are times for generating random floats and doubles in the range [0, 1) using standard C++ utilities. Two of the rows just measure the raw bit generation of the std::ranlux24 and std::ranlux48 engines, while the other two include the use of std::uniform_real<T> to generate floating point values. This utility is substantially slower than any of my custom methods (as implemented in Visual Studio 2015, anyway), though I would not be surprised if the distribution quality of the C++ utilities is in some way better; it’s not something I’ve investigated yet.

Standard Operation Total ops/s Total ns/op Overhead ns/op Overhead %
ranlux24 116,584,472 8.577
flt_co(ranlux24) 16,766,934 59.641 51.064 595.3%
ranlux48 98,892,721 10.112
dbl_co(ranlux48) 13,442,138 74.393 64.281 635.7%

These are the results on my laptop’s Intel Core i7-4700HQ, also an x86-64 machine, and using the exact same executable file for the benchmark application. Not only is it faster overall, but it also seems to suffer less overhead beyond the generation of bits when constructing the final floating point value. Might be the result of better pipelining behavior or floating point performance compared to my AMD processor?

32-bit Operation Total ops/s Total ns/op Overhead ns/op Overhead %
rand_next32 617,465,578 1.620
rand_float_co 533,629,167 1.874 0.254 15.7%
rand_float_oc 524,876,401 1.905 0.286 17.6%
rand_float_oo 416,715,822 2.400 0.780 48.2%
rand_float_cc 288,415,301 3.467 1.848 114.1%
rand_float_co_cast 308,418,977 3.242 1.623 100.2%
rand_float_oc_cast 315,350,235 3.171 1.552 95.8%
rand_float_oo_cast 393,888,418 2.539 0.919 56.8%
rand_float_cc_cast 276,605,718 3.615 1.996 123.2%
rand_float_co_div 322,211,971 3.104 1.484 91.6%
rand_float_oc_div 319,745,100 3.127 1.508 93.1%
rand_float_oo_div 312,832,881 3.197 1.577 97.4%
rand_float_cc_div 324,085,939 3.086 1.466 90.5%

64-bit Operation Total ops/s Total ns/op Overhead ns/op Overhead %
rand_next64 619,162,882 1.615
rand_double_co 525,873,328 1.902 0.287 17.7%
rand_double_oc 517,698,877 1.932 0.317 19.6%
rand_double_oo 412,640,202 2.423 0.808 50.0%
rand_double_cc 288,140,814 3.471 1.855 114.9%
rand_double_co_cast 404,888,226 2.470 0.855 52.9%
rand_double_oc_cast 405,158,394 2.468 0.853 52.8%
rand_double_oo_cast 414,983,760 2.410 0.795 49.2%
rand_double_cc_cast 235,530,408 4.246 2.631 162.9%
rand_double_co_div 216,825,435 4.612 2.997 185.6%
rand_double_oc_div 216,306,277 4.623 3.008 186.2%
rand_double_oo_div 214,925,793 4.653 3.038 188.1%
rand_double_cc_div 220,499,270 4.535 2.920 180.8%

Standard Operation Total ops/s Total ns/op Overhead ns/op Overhead %
ranlux24 113,273,936 8.828
flt_co(ranlux24) 20,950,600 47.731 38.903 440.7%
ranlux48 114,318,182 8.748
dbl_co(ranlux48) 23,091,134 43.307 34.559 395.1%

The overall pattern is that the primary bit-manipulating method introduced tends to be fastest, while the cast and division methods differ based on the strength of the CPU’s floating point capabilities. Also, the half-open ranges tend to be faster because they can more often be performed without any branches, though the division method is more consistent due to giving up perfect uniformity in favor of simple branchless implementation for all four of the ranges.

And I must state the usual caveat that effective benchmarking is hard, and I’m no expert. I repeatedly wound up with strange results and multiple times rewrote parts of the measurement code trying to make things more reliable. In particular, making the compiler perform all legitimate optimizations while not eliminating what it thinks is dead code is awkward. I ended up using an exclusive-or accumulation that ultimately gets printed out along with the timing results, so that all computations must be performed, with the hope that the exclusive-or operation itself would have a negligible effect on the timing. I also had to forcibly prevent a couple of functions (rand_probability) from getting inlined, because that was causing a rarely executed slow path to be optimized in a way that harmed the performance of the fast path.

Because I originally did this work for my Make It Random Unity asset, it was written in C#, running on Unity’s forked version of the Mono compiler and runtime. Within that environment, I definitely benefited from the engineering effort of implementing (and where necessary figuring out) these techniques. But I never directly measured them against the division method, since I had purposefully started out wanting to get the cleaner uniform distribution. Nor have I yet tested the casting method that gains an additional bit of precision, because I only devised that method in the course of writing this post. I did measure them extensively against equivalent methods in UnityEngine.Random and System.Random, since that was my practical competition, but the differences there extend well beyond the process of turning random bits into a floating point value.

Full C++ Source for Benchmark

Below I have included the full source code of the benchmark application (plus some additional validation code), if you would like to try it on your own system. I included some additional C++ features that are not part of the code above, such as packaging the random engines into structs rather than using a global state in order to easily multithread the validation code, but all the operational details are identical.
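As a simplified illustration of the measurement strategy described above (not the full benchmark itself), a minimal harness might look like the following; the names are my own, and the exclusive-or accumulator is there so the compiler cannot discard the work as dead code.

    #include <chrono>
    #include <cstdint>

    // Measure the average nanoseconds per call of a function returning a 32-bit
    // pattern, with a warmup pass and 16 calls per loop iteration.
    template <typename F>
    double measure_ns_per_call(F f, double warmup_seconds, double measure_seconds, uint32_t &accumulator)
    {
        using clock = std::chrono::steady_clock;

        auto run_for = [&](double seconds) -> uint64_t
        {
            uint64_t calls = 0;
            auto start = clock::now();
            do
            {
                for (int i = 0; i < 16; ++i) accumulator ^= f(); // 16 calls per iteration
                calls += 16;
            }
            while (std::chrono::duration<double>(clock::now() - start).count() < seconds);
            return calls;
        };

        run_for(warmup_seconds); // warm up caches and branch predictors

        auto start = clock::now();
        uint64_t calls = run_for(measure_seconds);
        double elapsed = std::chrono::duration<double>(clock::now() - start).count();
        return elapsed * 1.0e9 / (double)calls;
    }

    // Example: measure_ns_per_call([] { return as_uint32(rand_float_co()); }, 0.5, 10.0, acc);
    // Print the accumulator along with the timings so the calls cannot be optimized away.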
