You’ll probably have one or more ints in almost every method and class you write. They’re everywhere, so you really should know how to use them.

Knowing how to use the int data type means you need to understand the limits and consequences of using a fixed number of binary digits. The previous episode explained what types of ints exist and their lengths. This episode explains the huge role negative numbers play when working with ints and describes an interesting security vulnerability that can result when you don’t use ints properly.

The math that computers perform with your variables is different than in real life, because the computer can’t just add extra digits when it needs them. If I asked you to add 5 plus 5 on paper, you’d need to start using a new digit to represent 10. If you limited your view to just that single digit, then you might think that 5 plus 5 is equal to 0. When a calculation needs more bits than you have, that extra need is called an overflow, and it causes the result to wrap around back to the beginning.
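Here’s a small sketch of that wrap-around in C++ (the language choice here is just for illustration), using a fixed-width 8-bit unsigned type so the limit is easy to reach:

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    // 255 is the largest value 8 bits can hold.
    std::uint8_t counter = 255;

    // The result of the addition would need a ninth bit, so only the
    // lowest 8 bits survive and the counter wraps back to 0.
    counter = counter + 1;

    std::cout << static_cast<int>(counter) << '\n';  // prints 0
    return 0;
}
```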
And computers are also different because they work with the full word size even for small values. In real life, this would be like writing 0,000,000,005 every time you wanted to write the number 5. Adding leading zeros doesn’t change the value. At least in real life.
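You can see that fixed width in a quick C++ sketch; the exact size of an int depends on your compiler and platform, but 4 bytes is common:

```cpp
#include <iostream>

int main()
{
    int small = 5;

    // The value 5 only needs three binary digits, but the variable still
    // occupies the full width of an int (commonly 4 bytes, or 32 bits).
    std::cout << sizeof small << " bytes\n";
    return 0;
}
```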

And here’s yet another way computers are different. A computer doesn’t have a place to put a little negative sign, so instead it uses two’s complement to represent negative numbers. That means it uses the most significant bit to signal whether a number is negative or not.
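Here’s a short C++ sketch that prints the raw bits of a positive and a negative value, using 8-bit types just so the sign bit is easy to spot:

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

int main()
{
    std::int8_t positiveFive = 5;
    std::int8_t negativeFive = -5;

    // The negative value has its most significant bit set.
    // -5 in 8-bit two's complement is 11111011.
    std::cout << std::bitset<8>(static_cast<std::uint8_t>(positiveFive)) << '\n';  // 00000101
    std::cout << std::bitset<8>(static_cast<std::uint8_t>(negativeFive)) << '\n';  // 11111011
    return 0;
}
```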

This causes small negative numbers to appear just like large unsigned numbers. You have to know ahead of time how you want to interpret the bits.
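As a quick C++ sketch of that, the same 32-bit pattern of all ones reads as -1 when you treat it as signed and as 4294967295 when you treat it as unsigned:

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    // One bit pattern, two interpretations.
    std::int32_t asSigned = -1;
    std::uint32_t asUnsigned = static_cast<std::uint32_t>(asSigned);

    std::cout << asSigned << '\n';    // prints -1
    std::cout << asUnsigned << '\n';  // prints 4294967295
    return 0;
}
```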