Embed with Elliot: Keeping it Integral


If there’s one thing that a lot of small microcontrollers hate (and that includes the AVR-based Arduini), it’s floating-point numbers. And if there’s another thing they hate it’s division. For instance, dividing 72.3 by 12.9 on an Arduino UNO takes around 32 microseconds and 500 bytes, while dividing 72 by 13 takes 14 microseconds and 86 bytes. Multiplying 72 by 12 takes a bit under 2.2 microseconds. So roughly speaking, dividing floats is twice as slow as dividing (16-bit) integers, and dividing at all is five to seven times slower than multiplying.

A whole lot of the time, you just don’t care about speed. For instance, if you’re doing a calculation that only runs infrequently, it doesn’t matter if you’re using floats or slow division routines. But if you ever find yourself in a tight loop that’s using floating-point math and/or doing division, and you need to get a bit more speed, I’ve got some tips for you.

Some of these tips (in particular the integer division tricks at the end) are arcane wizardry — only to be used when the situation really calls for it. But if you’re doing the same calculations repeatedly, you can often gain a lot just by giving the microcontroller numbers in the format it natively understands. Have a little sympathy for the poor little silicon beasties trapped inside!

Float Avoidance

You can often completely avoid floating point numbers just by thinking a little bit. Simply seeing a number with a decimal place is no reason to pull out the floats. For instance, if you have a quantity, like the temperature on an LM75-style temperature sensor, that’s accurate down to 1/8 of a degree, you absolutely positively don’t need floating point numbers. How can you deal with 21.125 degrees without floats? Exactly the same way the LM75 does — it pre-multiplies everything by 8.
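
Here’s a minimal sketch of the idea in C. The variable names are mine, not from any particular LM75 driver, and I’m assuming a positive reading to keep it short:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t temp_eighths = 169;              /* 21.125 degrees, stored as 21.125 * 8 = 169 */
    int16_t threshold_eighths = 25 * 8;      /* compare against 25.000 degrees: no floats anywhere */

    if (temp_eighths > threshold_eighths) {
        /* ... turn the fan on ... */
    }

    /* only split into whole and fractional parts when a human needs to read it */
    printf("%d.%03d degrees\n", temp_eighths / 8, (temp_eighths % 8) * 125);
    return 0;
}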

Avoiding floating point numbers is often as easy as figuring out what the natural units of the problem are. Is that $12.34 or is it 1,234 cents? I can only cut with a hand saw accurately to about a millimeter. If I measure board lengths in millimeters, I’ll never need a decimal point. Sure, the numbers get bigger, which means that you may need to use the next-largest integer storage size, but microcontrollers don’t mind that nearly as much as floating point math.

So if you’re taking a reading from a 10-bit ADC, and its range is 3.3 V, the last thing you want to do is convert that directly to volts after measurement (even though it might make your code read nicer). Keep the ADC value around — it’s a nice 10-bit integer — and only convert out to actual volts when you absolutely need to, after all the other math has been done. And when you do, note that the right units for your problem are in terms of the minimum-possible measured difference, which is 3.3 V / 1023 = 3.23 millivolts. When you need to convert to physical units for the end-user, you’ll multiply by this value.
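
A sketch of what that looks like, with a stand-in read_adc() in place of real hardware access (the function name and the 16-sample average are my own invention):

#include <stdint.h>

/* stand-in for the real ADC read: returns the raw 10-bit count, 0-1023 */
static uint16_t read_adc(void)
{
    return 512;
}

uint16_t average_millivolts(void)
{
    uint32_t sum = 0;

    for (uint8_t i = 0; i < 16; i++) {            /* do all the math on raw counts */
        sum += read_adc();
    }

    uint16_t avg_counts = (uint16_t)(sum >> 4);   /* divide by 16 with a shift */

    /* convert to physical units only at the very end:
       3300 mV over 1023 counts, all in integer math */
    return (uint16_t)(((uint32_t)avg_counts * 3300UL) / 1023UL);
}

The one division by 1023 happens once, after the averaging, instead of sixteen floating-point conversions inside the loop.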

Try redefining units of measurement any time you’re running into fractions or floating point numbers. You’ll be surprised how often you didn’t need them in the first place. There’s one warning here though: be careful when converting back to the desired units. A square centimeter isn’t 10 square millimeters, it’s 100. Each “millimeter” in the conversion brings a power of ten with it. And ten millimeters divided by one millimeter is just ten, with no units left, and no need for conversion.

Fixed-Point Math


While simply getting away from the slower float type can be enough of a speedup for some cases, there’s an extra layer of optimization available, should you need it. A common choice, for instance, on small microcontrollers is to simply multiply all values by 256. This is called “8:8 fixed point” notation, because you’re using eight bits for the fractional part, and eight for the integer part.

It turns out that if you know the number of bits in the fractional part, you can optimize multiplications and divisions to run even faster than a 16-bit multiplication or division. Here’s a nice writeup of fixed-point math options for the Arduino, for instance. You can read up on a bunch of these optimizations in this Atmel application note. And note that the algorithms aren’t specific to any platform.
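
As a rough sketch of what an 8:8 routine looks like under the hood (this is my own minimal version, not code from the writeup or the app note):

#include <stdint.h>

typedef int16_t fix8_8_t;                     /* eight integer bits : eight fractional bits */

#define TO_FIX(x)  ((fix8_8_t)((x) * 256))    /* e.g. TO_FIX(1.5) is 384, computed at compile time */

static inline fix8_8_t fix_mul(fix8_8_t a, fix8_8_t b)
{
    /* widen to 32 bits, multiply, then shift the extra 8 fractional bits back out */
    return (fix8_8_t)(((int32_t)a * b) >> 8);
}

static inline fix8_8_t fix_div(fix8_8_t a, fix8_8_t b)
{
    /* pre-shift the numerator so the quotient keeps its 8 fractional bits */
    return (fix8_8_t)(((int32_t)a << 8) / b);
}

/* addition and subtraction are just the plain integer + and - */

With that, fix_mul(TO_FIX(1.5), TO_FIX(2.5)) comes out to 960, which is TO_FIX(3.75), and the whole thing is ordinary integer math plus a shift.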

So when you don’t need arbitrary fractions, fixed-point libraries can be a real speedup over using floating-point routines.

Floating Point Numbers Are Lousy Anyway

There’s a fairly well-known problem with floating point numbers: you have to be very careful when testing them for equality. That’s because not all of them are represented exactly. For instance, 0.1 isn’t representable exactly in floating point, so if you add ten of them up, the result isn’t going to be exactly 1.0 either. If you were to test sum10(0.1) == 1.0 it would come out false. That’s lousy.
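
A two-minute test on your desktop makes the point (the same float math applies on the micro); sum10() above is just shorthand for this loop:

#include <stdio.h>

int main(void)
{
    float sum = 0.0f;

    for (int i = 0; i < 10; i++) {
        sum += 0.1f;                 /* 0.1 has no exact binary representation */
    }

    /* prints something like: sum = 1.00000012, equal to 1.0: no */
    printf("sum = %.8f, equal to 1.0: %s\n", (double)sum, (sum == 1.0f) ? "yes" : "no");
    return 0;
}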

The other thing that’s peculiar about floating point numbers is due to the way they’re constructed. Floating point numbers are like scientific notation (1.23 × 10³) except in binary. IEEE 754 single precision, which most compilers use for float, gives you 24 bits of significand (the significant digits: 23 stored, plus one implied) and 8 bits for the exponent. This lets you represent both really large and really small numbers. But because there is the same number of significant digits everywhere, the gap between representable numbers is tiny for small numbers and doubles with each power of two in the exponent.

For example, 123456789.0 + 1.0 == 123456789.0 will be true, because the spacing between representable floats at that magnitude is eight, which is larger than 1. 123456789 + 1 == 123456789 will be false if you use (32-bit) integers, because the spacing between integers is always exactly one.
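
You can check that in a couple of lines too; the float comparison comes out true (the added 1 gets lost), the integer one comes out false:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    float   big_f = 123456789.0f;        /* actually stored as 123456792.0, the nearest float */
    int32_t big_i = 123456789;

    float big_f_plus_one = big_f + 1.0f; /* assignment forces the result back to float precision */

    printf("float: %s\n", (big_f_plus_one == big_f) ? "adding 1 got lost" : "adding 1 was kept");
    printf("int:   %s\n", (big_i + 1 == big_i)      ? "adding 1 got lost" : "adding 1 was kept");
    return 0;
}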

In short, integers are exact, and they keep the same distance between representable values throughout their range. I find this often corresponds very nicely to the kind of inputs and outputs I use with microcontroller projects. Floating point numbers, on the other hand, have a much wider range, but do so at the expense of growing distances between possible values. And when floats can’t represent numbers exactly, comparisons are not going to work exactly.

Joyless Division

Most of the lower-end microcontrollers (again, including the AVRs used in the Arduino) don’t have hardware division built in. This means that they will be using some kind of iterative algorithm to find the answer. This takes a while. There is a great shortcut, at least for integers, that you should know about. In binary, dividing by the nth power of two (2ⁿ) is the same as dropping the n least-significant bits off the number, at least for unsigned values. x >> 2 bit-shifts x over by two places, effectively dividing by four. It’s just like dividing by ten in base-10. Drop the last digit.

This type of bit-shift division (and its dual, multiplication) is optimized for you by AVR-GCC, so you don’t actually have to think about it. But it ends up with the odd situation that x/4 runs extremely quickly, while x/5 can take a bit longer. If you can pick your divisors, you should always pick them to be powers of two. If you were going to average ten values together, your code will run a lot faster if you only average eight, and may not be all that much worse if you use sixteen.
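
A hypothetical averaging routine shows why: with eight samples the divide melts into a single shift (which AVR-GCC will do for you even if you write / 8).

#include <stdint.h>

/* average eight readings: the divide is just a 3-bit shift, no division routine needed */
uint16_t average8(const uint16_t samples[8])
{
    uint32_t sum = 0;

    for (uint8_t i = 0; i < 8; i++) {
        sum += samples[i];
    }
    return (uint16_t)(sum >> 3);     /* same as sum / 8 */
}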

Integer Division Tricks

Image: minifigure.org

Which brings us finally to the promised dark wizardry — two nasty tricks that you can use when you need to divide integers in a hurry. The major caveat is that these techniques only work when the divisor is constant. You have to know what it is ahead of time to use this. But when you need to divide by thirteen over and over again, here are two ways to get you there as quickly as possible.

Multiply then Divide

The basic idea here is to pre-compute your division as a multiplication and a bit-shift. In the same way that you could divide by two by first multiplying by five and then dropping the least-significant digit (effectively dividing by ten), you can pre-compute the binary multiplier and number of bits to drop. The idea was laid out in this post by Douglas Jones, and then coded up here.

In particular, to divide any 8-bit number by thirteen, you first multiply the number by 158 (resulting in a 16-bit integer) and then drop the least-significant eleven bits. Yeah, right? It works because 158/2¹¹ = 158/2048 ≈ 1/13. It’s just math and it’s going to run about as fast as possible, but it’s not entirely readable.
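
Here’s what that looks like in C, with a quick brute-force check over every 8-bit input (the function name is mine):

#include <stdint.h>
#include <stdio.h>

/* divide an 8-bit value by 13: multiply by 158, drop 11 bits, since 158/2048 is close enough to 1/13 */
static inline uint8_t div13(uint8_t x)
{
    return (uint8_t)(((uint16_t)x * 158u) >> 11);
}

int main(void)
{
    /* sanity check against ordinary division for all 256 possible inputs */
    for (unsigned int x = 0; x < 256; x++) {
        if (div13((uint8_t)x) != x / 13) {
            printf("mismatch at %u\n", x);
            return 1;
        }
    }
    printf("div13() matches x / 13 for every 8-bit x\n");
    return 0;
}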

Lookup Table

If you really, absolutely need to divide by thirteen in a hurry, there’s one way that’s faster than the above trick: pre-computing all the possible values beforehand and storing them in an array. For instance, with 8-bit numbers, create a table of all 256 possible values divided by thirteen. We’ll call it thirteen_table[]. Then when you need to know what 123 / 13 is super quickly, just look up thirteen_table[123] and you’re done. It’s cheating, and it uses 256 bytes of memory, but if this trick gets you out of a sticky situation, it’s worth knowing.
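
Something like this, sketched in plain C (the init function name is mine; on an AVR you’d probably want the table in flash via PROGMEM rather than eating 256 bytes of RAM):

#include <stdint.h>

static uint8_t thirteen_table[256];

/* fill the table once at startup; after that every "division" is one array lookup */
void init_thirteen_table(void)
{
    for (uint16_t i = 0; i < 256; i++) {
        thirteen_table[i] = (uint8_t)(i / 13);
    }
}

/* later, in the time-critical part:
 *     uint8_t quotient = thirteen_table[reading];   // reading / 13, no math at all
 */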

Premature Optimization

There’s a maxim of Donald Knuth’s, “premature optimization is the root of all evil”. The basic argument is that if you always write your division routines aiming for maximum speed, you might end up with an unreadable mess of code for very little benefit. Maybe the right place to spend your optimization time was making sure that two different subsystems could run concurrently, or something else.

So take all of the above with a grain of salt. Thinking your units through, and figuring out if it’s possible to re-cast them into integers, is probably always a good idea if you don’t have hardware floating-point math. Swapping out arbitrary divisors for a power of two is also pretty much always a win, if you’re already using integers.

Whether you want to take the next step to a fixed-point library, or are able to use the dirty tricks for fast division by constants, depends on your situation and your needs. If it over-complicates your code and the CPU has lots of free time, or if it’s in a non-time-critical section, don’t bother.

What other dirty math tricks should we know about? What’s your favorite or most-hated? Integer vs. floating point? Let us know in the comments!


