If you need floating point values but find that the accuracy of floats and even doubles causes problems, then consider using the decimal type.

But first, what’s the difference between accuracy and precision? Are they interchangeable? It’s easy to get them confused. If I told you that I was 35 years, 2 months, 10 days, and 4 and a half minutes old, then I’m being very precise but not very accurate. If instead, I said that I’m about 45 years old, then I’m nowhere near as precise but much more accurate.

Integers don’t give you the ability to handle fractional values. For that, you can start with the float type, which gives you about 7 significant digits of precision and limited accuracy. In exchange for this loss of accuracy, you get a much extended range. In scientific notation, you can represent numbers in float up to 10 to the 38th power or as small as 10 to the -45th power. Those are huge and extremely tiny numbers. They’re just not very accurate. And they’re not very precise either. Sure, you can have a number with 38 digits, but you only get to specify 7 of those digits.

Both floats and doubles give you increased range over integers but are not precise over their entire range. And both floats and doubles struggle to accurately represent values that we expect to be simple. This is because they use a base two representation internally, while we normally work with numbers in base ten. Some numbers will always be approximations in both systems, such as one third.

The decimal type is a floating point type but it uses the same base ten as we do in real life. Floating point numbers have a mantissa and an exponent. Each of these is a number stored in binary. What makes decimal types different from either floats or doubles is how the exponent is interpreted.

Listen to the full episode about the decimal type or read further for the full transcript below.

Transcript

I realized that I didn’t fully explain the difference between accuracy and precision in episode 112 about floats. This is a good time to revisit that because I’m going to explain a different floating point data type that your language might have. Not all languages have the decimal type, and while it looks like just another floating point type with more bits, that description would be very wrong.

But first, what’s the difference between accuracy and precision? Are they interchangeable? Sometimes, I might use one when I should use another especially when I’m talking about a different topic. It is easy to get them confused. I’ll try to avoid that at least in this episode.

If I told you that I was 35 years, 2 months, 10 days, and 4 and a half minutes old, then I’m being very precise but not very accurate. If instead, I said that I’m about 45 years old, then I’m nowhere near as precise but much more accurate.

When you need to work with numbers, you can choose to work with integers in all their various sizes. Each type is exact, but only for whole numbers within the range of either a short, an int, a long, or a long long. The bigger the int and the more bits you have to work with, the more precise you can be even at large values. You can easily represent an accurate number in the billions and add and subtract small whole numbers while remaining accurate and precise.
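A quick sketch of that trade-off in Python. Python’s own ints have arbitrary precision, so to show the behavior of a fixed-size int like the ones mentioned above, this sketch borrows ctypes to emulate 32-bit storage:

```python
import ctypes

# A 32-bit signed int is exact for every whole number in its range,
# comfortably covering values in the billions.
print(2**31 - 1)  # largest 32-bit signed int: 2147483647

# But one step past the range wraps around. (Python's own ints never
# overflow, so ctypes is used here to emulate fixed 32-bit storage.)
print(ctypes.c_int32(2**31).value)  # -2147483648
```

The wraparound is why picking a big enough integer type matters even though every value inside the range is exact.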

But ints don’t give you the ability to handle fractional values. For that, you can start with the float type, which gives you about 7 significant digits of precision and limited accuracy. In exchange for this loss of accuracy, you get a much extended range. In scientific notation, you can represent numbers in float up to 10 to the 38th power or as small as 10 to the -45th power. Those are huge and extremely tiny numbers. They’re just not very accurate. And they’re not very precise either. Sure, you can have a number with 38 digits, but you only get to specify 7 of those digits.
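You can see the 7-digit limit directly. Python floats are doubles, so this sketch uses the struct module to round-trip a value through genuine 32-bit float storage:

```python
import struct

def to_float32(x):
    """Round-trip a Python float (a double) through 32-bit float storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A 9-digit whole number doesn't survive: only about 7 digits are kept,
# and the value snaps to the nearest representable float.
print(to_float32(123456789.0))  # 123456792.0
```

The number is still roughly right, which is the accuracy/precision trade the episode describes: huge range, but only about 7 digits you actually get to choose.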

You can move to doubles and get a lot more precise, now with 15 or 16 digits, and an even bigger range. A double can represent numbers with over 300 digits. That’s mind boggling. But again, you only get to be precise with 15 or 16 of those digits.
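Here’s a sketch of where a double’s precision runs out. A double carries 53 bits of mantissa, which works out to 15 to 16 decimal digits; past 2 to the 53rd power, consecutive whole numbers can no longer be told apart:

```python
# 2**53 is a 16-digit number: 9007199254740992.
print(2**53)

# Adding 1 produces a whole number a double can no longer distinguish,
# so both convert to the same floating point value.
print(float(2**53) == float(2**53 + 1))  # True
```

This is the same story as the float type, just with more digits before the precision gives out.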

Both floats and doubles give you increased range over integers but are not precise over their entire range. And both floats and doubles struggle to accurately represent values that we expect to be simple. This is because they use a base two representation internally, while we normally work with numbers in base 10. Some numbers will always be approximations in both systems, such as one third.
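Those simple-looking values that binary floating point struggles with are easy to catch. A quick sketch in Python (where the plain float type is a double):

```python
# 1.1 and 2.2 have no exact base-two representation, so the stored
# values are close approximations and the tiny errors leak into sums.
print(1.1 + 2.2)         # 3.3000000000000003
print(0.1 + 0.2 == 0.3)  # False
```

Nothing here is a bug; it is simply what happens when simple base-ten fractions are forced into base two.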

You actually sometimes have another choice for working with floating point values that resembles the accuracy we expect. You no longer have to worry about 1.1 plus 2.2 somehow not equaling 3.3. The decimal type is here to save the day. I’ll explain more right after this message from our sponsor.

( Message from Sponsor )

The decimal type is a floating point type, but it uses the same base 10 as we do in real life. Don’t get me wrong, computers still store numbers in binary. But floating point numbers have a mantissa and an exponent, and each of these is a number stored in binary. What makes decimal types different from either floats or doubles is how the exponent is interpreted.
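You can peek at that base-ten mantissa and exponent directly. This sketch uses Python’s standard decimal module; the as_tuple method is specific to Python, but the sign/digits/exponent layout it exposes is the idea common to decimal types:

```python
from decimal import Decimal

# 1.23 is held as base-ten digits (1, 2, 3) with a base-ten exponent
# of -2, in other words 123 x 10^-2.
print(Decimal('1.23').as_tuple())

# Because the digits are base ten, sums that surprise floats come out exact.
print(Decimal('1.1') + Decimal('2.2') == Decimal('3.3'))  # True
```

The bits are still binary underneath; it’s the interpretation of those bits as base-ten digits and a base-ten exponent that changes everything.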

For a normal number in base 10, we can convert this to scientific notation by taking just the significant digits and moving the decimal point to a known location. Then the exponent keeps track of how far and in what direction the decimal point was moved. Let’s take a simple example, the number 123. The decimal point comes after the 3 because 123 can also be written as 123.0. Scientific notation says to move the decimal point so there’s just one digit to its left. For the number 123, we move the decimal point so the mantissa becomes 1.23. Then because we moved the decimal point 2 places to the left, the exponent becomes 2.

If instead we had to move the decimal point the other way for a small number, then the exponent would be negative. Let me give you an example of that. Let’s take a tiny fraction of a number, 0.0123. We do the same thing and move the decimal point so there’s just one significant digit to its left. That gives us the same 1.23 as before. But this time, because we had to move the decimal point to the right by 2 places, the exponent becomes -2. If you get these confused, just remember that positive exponents are used for numbers greater than 1 and negative exponents are used for fractional numbers between 0 and 1. I didn’t explain how to deal with negative numbers themselves, but the same process applies.
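Both worked examples can be checked in one line each. Python’s 'e' format specifier performs exactly this decimal-point shuffle:

```python
# 123 becomes 1.23 with the point moved 2 places left: exponent +2.
print(format(123, '.2e'))     # 1.23e+02

# 0.0123 becomes 1.23 with the point moved 2 places right: exponent -2.
print(format(0.0123, '.2e'))  # 1.23e-02
```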

I also don’t want this to turn into a math exercise, but a bit of it is necessary in order to understand the decimal data type, how it’s different from floats and doubles, and when to use it.

Alright, so that’s how scientific notation works. Floats and doubles follow a similar approach except they don’t work with moving the decimal point like we’re used to. They work with an exponent that’s a power of 2 instead of a power of 10. This is why they struggle with some values that are easy in decimal. Both systems will struggle with values such as one third. But a system based on a power of 2 exponent will need to approximate more values than a system based on a power of 10. Here’s where I’m really going to draw the line. If I try to explain why, then we’re really in for a math lesson and it involves things such as prime number factors. I don’t know about you, but factoring numbers was not one of my favorite things in school. Luckily, I’ve never had to do this in my software career.

If there are a couple of particular values that floats and doubles struggle with that cause more problems than any others, I’d say it has to be the tenths and hundredths. That’s 0.1 and 0.01. Because, just think about it, our whole currency system is based on cents. We divide a dollar into 4 quarters, into 10 dimes, into 20 nickels, and into 100 pennies. And every single one of these causes problems for floats and doubles. In fact, it causes so many problems that you really shouldn’t use either floats or doubles for calculating money.

Imagine if you worked hard, saved up a lot of money, and every single time you made a purchase through your bank, they managed to mess up 30 cents here, 49 cents another time, 71 cents just gone when you bought lunch, etc. That’s what you’ll get if you try to use floats or doubles with money.
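The dimes make an easy demonstration. A sketch in Python, whose standard decimal module provides a decimal type:

```python
from decimal import Decimal

# Ten dimes should make exactly one dollar, but as doubles they don't.
dimes_as_floats = sum([0.1] * 10)
print(dimes_as_floats)         # 0.9999999999999999
print(dimes_as_floats == 1.0)  # False

# The same ten dimes as decimals add up exactly.
dimes_as_decimals = sum([Decimal('0.1')] * 10)
print(dimes_as_decimals == Decimal('1.0'))  # True
```

One lost sliver of a dime per customer, millions of times a day, is exactly the kind of error a bank cannot tolerate, which is why money code reaches for decimal types.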

The decimal type is much better suited to keeping track of numbers the way we expect. So if it’s so good, why not just use it all the time? Well, it’s not nearly as fast as floats and doubles. Your computer has hardware support for calculating with floats and doubles, while the decimal type needs to perform all its calculations in software. It’s a lot slower.
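You can measure the gap yourself. A rough timing sketch with Python’s timeit module; the exact numbers vary by machine and by how a given language implements its decimal type, but decimal addition is typically several times slower than hardware float addition:

```python
import timeit

# Time 100,000 additions of two floats (hardware floating point).
float_time = timeit.timeit(
    'x + y', setup='x, y = 1.1, 2.2', number=100_000)

# Time 100,000 additions of two decimals (software base-ten arithmetic).
decimal_time = timeit.timeit(
    'x + y',
    setup="from decimal import Decimal; x, y = Decimal('1.1'), Decimal('2.2')",
    number=100_000)

print(f'float:   {float_time:.4f}s')
print(f'decimal: {decimal_time:.4f}s')
```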

The decimal type also lacks the extreme range of the double type. If you really need to keep track of numbers bigger than there are atoms in the universe, then you’ll have to use a double. That’s just fine because nobody can count the atoms to know how accurate you are anyway. But for smaller ranges, the decimal type shines. And by smaller, I don’t mean to imply that the decimal type is small. It’s still huge, even if it’s nowhere near the number of atoms in our solar system, let alone the universe. It’s big enough to handle practically anything you could want. And the decimal type is accurate in all the ways we expect, as long as you have a little extra time to spend waiting for the answer.
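The exact precision and range depend on the language. As one concrete example, Python’s decimal module defaults to 28 significant digits of precision (and makes both the precision and the exponent limits configurable through its context):

```python
import decimal
from decimal import Decimal

# The default context carries 28 significant base-ten digits.
print(decimal.getcontext().prec)  # 28

# A division that never terminates gets rounded at that precision.
print(Decimal(1) / Decimal(3))    # 0.3333333333333333333333333333
```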
