
Be careful with floating point numbers when building games.

You could be in for some big surprises if you don't understand how computers work with fractional values. You might be able to write an app to keep track of your friends' addresses without using any floating point numbers. But most games will need math that uses fractional values.

That is, if you want any kind of game with smooth motion. If your game consists of typing letters in words, then maybe you won't have to worry about this. But if you want those letters to move slowly across the screen as the user drags them into place, then you'll need to use fractional values.

You might think that you can get away from this by just using really small whole numbers. Maybe you track individual pixels. The problem is that computers are really fast these days and can do a lot of work even between pixels. You might find that the computer tries to move an image by one thousandth of a pixel. If you only work with whole numbers, then this becomes zero. And your image will be stuck, never moving at all, because the computer is so fast that each attempt only tries to move it by a tiny amount that rounds down to nothing.

Instead of limiting yourself to whole numbers, you'll need to understand how to use fractional values. And that means you need to understand how to use floating point numbers. You should also listen to episode #112 about the float data type, or you can read the full transcript below. I try not to repeat information between episodes, so you'll find both this episode and the earlier one have something for you.

Transcript

You could be in for some big surprises if you don't understand how computers work with fractional values. You might be able to write an app to keep track of your friends' addresses without using any floating point numbers. But most games will need math that uses fractional values.

That is, if you want any kind of game with smooth motion. If your game consists of typing letters in words, then maybe you won't have to worry about this. But if you want those letters to move slowly across the screen as the user drags them into place, then you'll need to use fractional values.

You might think that you can get away from this by just using really small whole numbers. Maybe you track individual pixels. The problem is that computers are really fast these days and can do a lot of work even between pixels. You might find that the computer tries to move an image by one thousandth of a pixel. If you only work with whole numbers, then this becomes zero. And your image will be stuck, never moving at all, because the computer is so fast that each attempt only tries to move it by a tiny amount that rounds down to nothing.
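Here's a minimal sketch of that trap in C++. The variable names and the 0.001 pixels-per-frame figure are made up for illustration, but the effect is real: the whole-number position throws away each tiny move, while the floating point position accumulates them.

```cpp
#include <cstdio>

int main() {
    int   intX   = 100;    // position in whole pixels
    float floatX = 100.0f; // position with a fractional part

    // Imagine 2000 frames, each trying to move the image 0.001 pixels.
    for (int frame = 0; frame < 2000; ++frame) {
        intX   += static_cast<int>(0.001f); // 0.001 truncates to 0 every frame
        floatX += 0.001f;                   // the tiny moves add up
    }

    std::printf("int position:   %d\n", intX);   // still 100
    std::printf("float position: %f\n", floatX); // about 102
    return 0;
}
```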

Instead of limiting yourself to whole numbers, you’ll need to understand how to use fractional values. And that means you need to understand how to use floating point numbers. They’re called floating point because the decimal point that separates the whole number portion from the fractional number portion can move around as needed.

Actually, I completely made up that last part. It seems reasonable to me and this is how I’ve always thought of floating point numbers. But I really have no idea where the name comes from.

So that I don't repeat earlier content, you'll want to listen to episode #112 where I described the float data type. In this episode, I just wanted to bring the topic to your attention again, especially since I'm describing game development topics and some of the math you'll need to use.

Early computers were really good at adding, subtracting, multiplying, and dividing whole numbers. They still are. You can represent all the values from 0 up through the maximum value without missing any numbers. I mean, it would be really bad if computers, say, had trouble with 8. To a computer, all the numbers are just a series of binary 1's and 0's. They can even represent negative numbers through something called two's complement. This is where you flip all the bits and then add 1.

So if you start out with the value 1 and want to make it negative, then you first flip all the bits. This gives you all 1's except for the least significant bit, which started out as 1 and is now 0. Then when you add 1, even that last 0 becomes a 1. You end up with the value -1 being all 1's.
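As a quick sketch, you can watch this happen in C++ with an 8-bit integer:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    std::int8_t x = 1;                 // binary 00000001
    std::int8_t flipped = ~x;          // binary 11111110, which is -2
    std::int8_t negated = flipped + 1; // binary 11111111, which is -1
    std::printf("%d %d %d\n", x, flipped, negated); // prints: 1 -2 -1
    return 0;
}
```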

You need a different system to represent fractional values. And early computers didn't agree on how this should be done. The computer engineers of that time, and I'm talking about the late 1970s, decided that a standard was needed. The result was the IEEE 754 standard, first published in 1985, which describes how floating point values are represented in computers.

Computers now had a standard, but working with floating point numbers was still slow. Any operation required the main processor to break the floating point number into pieces, calculate each piece separately, and then put everything back into the standard format for the answer.

It wasn't anything like flipping some bits. Although one nice thing about floating point values is that there's a single bit that holds the sign of the number. So something as simple as changing the number 1.0 into -1.0 and back again can be done by turning a single bit on and off.
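For instance, here's a small sketch of flipping just the sign bit of a float in C++. The flipSign name is mine; the 0x80000000 mask is the sign bit of an IEEE 754 single-precision value.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Toggle the sign bit, the most significant of a float's 32 bits.
float flipSign(float value) {
    std::uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits); // view the float's raw bits
    bits ^= 0x80000000u;                     // flip only the sign bit
    std::memcpy(&value, &bits, sizeof value);
    return value;
}

int main() {
    std::printf("%f\n", flipSign(1.0f));  // prints -1.000000
    std::printf("%f\n", flipSign(-1.0f)); // prints 1.000000
    return 0;
}
```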

Let me also sidestep here for a moment. The number 1 is a whole number. It's an int. The number 1.0 is a floating point number. Just because the values after the decimal point are all zeros doesn't turn it into a whole number. Sure, you can always convert between whole numbers and their floating point equivalents. But going back to whole numbers might cause you to lose some bits. Going from 1.0 to 1 won't lose anything. But those are specific examples. In general, a language like C++ will warn you when you convert from a floating point value into an int value, and with brace initialization it refuses outright, unless you tell it that it's okay with an explicit cast.
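Here's a brief sketch of what that looks like in practice:

```cpp
int main() {
    double d = 1.9;
    int a = d;                   // compiles, but most compilers warn: a becomes 1
    int b = static_cast<int>(d); // explicit: "I know the fraction is discarded"
    // int c{d};                 // brace initialization rejects this narrowing outright
    return a + b;
}
```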

Back to the early computers. We have a standard, but it's still slower to work with floating point numbers than it is to work with integer numbers. Unless your computer had extra hardware that also understood the IEEE 754 standard. By the time I was buying my first computer in the 1980s, there was an empty socket on the computer motherboard where an additional chip could be placed to help speed things up a bit. This was called a numeric co-processor.

For Intel processors, if your computer had an 8086 processor, then you could buy an 8087 math co-processor and plug it into the empty socket. If you had an 80186 processor, then there was an 80187 math co-processor. Intel kept this pattern going for some time, through the 80287 and 80387. Eventually, with the 80486, they stopped selling separate math co-processors and started putting everything on a single chip. Now, we don't even think about the extra work needed to understand and operate on floating point values. We just know that computers are really fast at working with any type of number.

The math co-processors that are built into modern computer CPUs are able to work with floating point values much faster than ever.

But they still have to treat some values as approximate. In other words, a value like 1.1, which we can represent exactly in decimal, is stored as something slightly off from the exact value of 1.1. It's close but not exact. This is just like how we can't express 1/3 exactly as the decimal value 0.3333 either. The threes need to keep going forever. Computers don't have forever. And the IEEE standard defines a fixed number of bits available. Once the bits run out, the value stops being as precise as we might expect.
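You can see this for yourself by asking for more digits than a double really holds. This sketch prints the value the computer actually stores for 1.1:

```cpp
#include <cstdio>

int main() {
    double d = 1.1;
    std::printf("%.20f\n", d); // prints 1.10000000000000008882, not 1.1 exactly
    return 0;
}
```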

Now, we're already familiar with values like 1/3 being imprecise. We work with base 10, or decimal. It turns out that having ten fingers for our counting is actually very fortunate. Because what are the factors of 10? By that, I mean, what whole numbers can be multiplied together to get 10? Well, 1 times 10 equals 10. And so does 2 times 5. Three doesn't go into ten evenly. And that's why one third can't be represented easily in base 10. Even though four is not a factor, its only prime factor is 2, which does divide ten, so one fourth is no problem. One fifth is no problem because five is a factor of ten. One sixth, one seventh, and one ninth will all give problems.

Because 1, 2, 5, and 10 are all factors of 10, any fraction based on these values can be represented exactly in base 10. So a value like 1.1 is really 1 and 1 tenth. That 1 tenth part is what I mean about going into ten evenly. What about 1.2? Well, this is 1 and 2 tenths, or you can also say it's 1 and 1 fifth. Again, it goes into ten evenly.

But binary is base two. What are the factors of 2? Just 1 and 2. We no longer have tenths available as exact values in binary. All we can really do in binary with exact precision is divide by 2.
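A short sketch makes the difference visible. Halves and quarters compare exactly, while tenths don't. The == comparisons are deliberate here, even though comparing floating point values with == is normally a bad habit.

```cpp
#include <cstdio>

int main() {
    // Quarters only divide by 2, so they are exact in binary.
    std::printf("%s\n", (0.25 + 0.25 == 0.5) ? "exact" : "not exact"); // exact
    // Tenths are not exact in binary, so the sum misses by a tiny amount.
    std::printf("%s\n", (0.1 + 0.2 == 0.3) ? "exact" : "not exact");   // not exact
    return 0;
}
```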

For the most part, this is okay. The IEEE standard gives us enough bits that 1.1 can be represented by a number that’s really, really close. Just be aware that it’s not exact.