If you need floating point values but find that the accuracy of floats and even doubles causes problems, then consider using the decimal type.

But first, what’s the difference between accuracy and precision? Are they interchangeable? It’s easy to confuse the two. If I told you that I was 35 years, 2 months, 10 days, and 4 and a half minutes old, then I’m being very precise but not very accurate. If instead I said that I’m about 45 years old, then I’m nowhere near as precise but much more accurate.

Integers don’t give you the ability to handle fractional values. For that, you can start with the float type, which gives you about 7 digits of precision and limited accuracy. In exchange for this loss of accuracy, you get a greatly extended range. In scientific notation, a float can represent numbers as large as 10 to the 38th power or as small as 10 to the -45th power. Those are huge and extremely tiny numbers. They’re just not very accurate. And they’re not very precise either. Sure, you can have a number with 38 digits, but you only get to specify 7 of those digits.
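You can see both sides of that tradeoff in a few lines. I’m assuming C# for these sketches, since its float, double, and decimal types match the sizes described here:

```csharp
using System;

class FloatRange
{
    static void Main()
    {
        // The range is enormous in both directions.
        Console.WriteLine(float.MaxValue); // ~3.4028235E+38
        Console.WriteLine(float.Epsilon);  // ~1.401298E-45 (smallest positive value)

        // But only about 7 significant digits survive.
        // 123456789 needs 9 digits, so the float rounds it.
        float f = 123456789f;
        Console.WriteLine(f.ToString("F0")); // 123456792
    }
}
```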

You can move to doubles and get a lot more precision, now with 15 to 16 digits, and an even bigger range. A double can represent numbers with over 300 digits. That’s mind boggling. But again, you only get to be precise with 15 or 16 of those digits.
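The same experiment with a double shows the bigger range and the 15 to 16 digit limit. The exact printed value depends on IEEE 754 rounding, but on typical hardware it looks like this:

```csharp
using System;

class DoubleRange
{
    static void Main()
    {
        // The range now stretches past 10 to the 308th power.
        Console.WriteLine(double.MaxValue); // ~1.7976931348623157E+308

        // Write 20 digits and the trailing ones get rounded away.
        double d = 12345678901234567890d;
        Console.WriteLine(d.ToString("F0")); // 12345678901234567168
    }
}
```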

Both floats and doubles give you increased range over integers but are not precise over their entire range. And both floats and doubles struggle to accurately represent values that we expect to be simple. This is because they use a base two representation internally while we normally work with numbers in base ten. Some numbers, such as one third, will always be approximations in either base. But others, like one tenth, are exact in base ten yet repeat forever in base two, so floats and doubles can only approximate them.
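The most famous example of this is adding a tenth and two tenths with doubles. Neither value is exact in base two, so the result misses 0.3 by a tiny amount:

```csharp
using System;

class BinaryFractions
{
    static void Main()
    {
        // Neither 0.1 nor 0.2 has an exact base two representation.
        double sum = 0.1 + 0.2;
        Console.WriteLine(sum == 0.3);        // False
        Console.WriteLine(sum.ToString("R")); // 0.30000000000000004
    }
}
```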

The decimal type is a floating point type too, but it uses the same base ten that we use in real life. Floating point numbers have a mantissa and an exponent, and each of these is a number stored in binary. What makes the decimal type different from floats and doubles is how the exponent is interpreted: it’s treated as a power of ten instead of a power of two.
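If you’re using C#’s decimal, you can actually peek at this layout with the decimal.GetBits method, which returns the 96 bit mantissa in three ints plus a flags word holding the sign and the power-of-ten scale:

```csharp
using System;

class DecimalLayout
{
    static void Main()
    {
        decimal d = 1.23m;
        int[] bits = decimal.GetBits(d);

        // The scale lives in bits 16 through 23 of the flags word and
        // means "divide the mantissa by 10 to this power."
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine(bits[0]); // 123 (the mantissa's low word)
        Console.WriteLine(scale);   // 2, so the value is 123 / 10^2
    }
}
```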

If there are a couple of particular values that cause floats and doubles more problems than any others, I’d say they have to be tenths and hundredths. That’s 0.1 and 0.01. Because, just think about it, our whole currency system is based on cents. We divide a dollar into 4 quarters, into 10 dimes, into 20 nickels, and into 100 pennies. And aside from the quarter, which happens to be an exact power of two fraction at 0.25, every single one of these causes problems for floats and doubles. In fact, it causes so many problems that you really shouldn’t use either floats or doubles for calculating money.
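You can watch the money errors build up by adding ten dimes both ways. The decimal version lands exactly on a dollar while the double version comes up short:

```csharp
using System;

class TenDimes
{
    static void Main()
    {
        double asDouble = 0.0;
        decimal asDecimal = 0.0m;
        for (int i = 0; i < 10; i++)
        {
            asDouble += 0.1;   // each dime adds a tiny error
            asDecimal += 0.1m; // each dime is exact
        }
        Console.WriteLine(asDouble.ToString("R")); // 0.9999999999999999
        Console.WriteLine(asDecimal);              // 1.0
    }
}
```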

The decimal type is much better suited to keeping track of numbers the way we expect. So if it’s so good, why not just use it all the time? Well, it’s not nearly as fast as floats and doubles. Your computer has hardware support for calculating with floats and doubles, while the decimal type needs to perform all of its calculations in software. It’s a lot slower.
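If you want to measure the difference yourself, here’s a rough timing sketch using Stopwatch. The exact numbers depend entirely on your machine, but expect the decimal loop to take several times longer:

```csharp
using System;
using System.Diagnostics;

class SpeedCheck
{
    static void Main()
    {
        const int N = 10_000_000;

        var sw = Stopwatch.StartNew();
        double dSum = 0.0;
        for (int i = 1; i <= N; i++) dSum += 1.0 / i; // hardware floating point
        Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms (sum {dSum})");

        sw = Stopwatch.StartNew();
        decimal mSum = 0.0m;
        for (int i = 1; i <= N; i++) mSum += 1.0m / i; // software floating point
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (sum {mSum})");
    }
}
```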